• Matthew Ball writes about why it can seem like AR/VR technology is perpetually “only a few years away” from mass adoption:

    As we observe the state of XR in 2023, it’s fair to say the technology has proved harder than many of the best-informed and most financially endowed companies expected. When it unveiled Google Glass, Google suggested that annual sales could reach the tens of millions by 2015, with the goal of appealing to the nearly 80% of people who wear glasses daily. Though Google continues to build AR devices, Glass was an infamous flop, with sales in the tens of thousands.

    […]

    Throughout 2015 and 2016, Mark Zuckerberg repeated his belief that within a decade, “normal-looking” AR glasses might be a part of daily life. Now it looks like Facebook won’t launch a dedicated AR headset by 2025—let alone an edition that hundreds of millions might want.

    […]

    In 2016, Epic Games founder/CEO Tim Sweeney predicted not only that within five to seven years, we would have not just PC-grade VR devices but also that these devices would have shrunk down into Oakley-style sunglasses.

    It will be interesting to see how the release of Apple’s first mixed reality headset, rumored for later this year, will move the needle on this.

  • Nico Grant, reporting for The New York Times:

    Last month, Larry Page and Sergey Brin, Google’s founders, held several meetings with company executives. The topic: a rival’s new chatbot… [ChatGPT] has shaken Google out of its routine… Google now intends to unveil more than 20 new products and demonstrate a version of its search engine with chatbot features this year, according to a slide presentation reviewed by The New York Times

    […]

    [Page and Brin] reviewed plans for products that were expected to debut at Google’s company conference in May, including Image Generation Studio, which creates and edits images, and a third version of A.I. Test Kitchen, an experimental app for testing product prototypes.

    […]

    Other image and video projects in the works included a feature called Shopping Try-on, a YouTube green screen feature to create backgrounds; a wallpaper maker for the Pixel smartphone; an application called Maya that visualizes three-dimensional shoes; and a tool that could summarize videos by generating a new one, according to the slides.

    […]

    Google executives hope to reassert their company’s status as a pioneer of A.I.

    While many of the rumored products don’t sound particularly compelling to me, Google does indeed seem serious about this bet. Although Google recently laid off more than 12,000 employees, almost none of them were working in its AI division.

    I have no doubt that Google has all of the talent and resources required to become a leader in this space. The mystery is why they have been moving so slowly. Whether that is because of safety concerns, unclear monetization, or something else entirely is a question Google will need to sort out.

  • § I have tried, a couple of times, to play The Last of Us video game, but I always bounce off of the video game-y aspects. It’s frustrating, not just because the game sequences can be difficult. If that were the case I wouldn’t feel bad putting the game down. The reason I find it frustrating is that I am actually interested to see how the story resolves, and I find the game can be a barrier to that at times.

    Well, the first episode of the new Last of Us television show was just released last Sunday. I thought it was… pretty good? It is odd because, in a way, I almost think I would like it more if I had never played the game. But maybe that is for the best. I would prefer the series stand on its own rather than rely on any prior knowledge of the game. Overall, I am excited to see more episodes! This is going to be an interesting test to see how integral the interactive aspects of gameplay are to effective storytelling.


    § Following up on the citrus talk last week, I tried an oroblanco, which is a cross between a pomelo and a grapefruit. The one I bought had a giant pith, so although the fruit itself was the size of a large grapefruit, the actual edible portion was equivalent to an orange.

    In terms of taste I thought it was almost identical to grapefruit. I would rank them all:
    pomelo > grapefruit > orange > oroblanco

    I also tried candying the peel, which was pretty good, although quite bitter. I wish I had thought to try it when I had the pomelo last week too.


    § Caroline and I took advantage of the long weekend and the warm-ish weather by spending a lot of time exploring the nearby parks, including the only national park in Ohio. I have also been bringing around my long-neglected Fujifilm X100F camera. I always seem to forget how drastically better that camera’s images are compared to my phone’s.


    § Links

    § Recipes

    • Roasted butternut squash & brussels sprouts with honey-herb dressing
      • I’m a huge fan of roasted brussels sprouts and this was probably my favorite recipe for them yet. There was also the super interesting step of adding baking soda to the vegetables, which was entirely new to me: “The baking soda acts as a catalyst and accelerates both caramelization and the Maillard reaction, while also softening the pectin in the squash for a softer, creamier interior.”
    • Dal Makhani
      • Another one of my favorite Indian recipes. This naan recipe was a pretty good addition — even after substituting gluten-free flour.
  • LangChain is an open source project designed to provide interoperability between large language models and external programs.

    From the project’s documentation:

    Large language models (LLMs) are emerging as a transformative technology… But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you are able to combine them with other sources of computation or knowledge… [LangChain] is aimed at assisting in the development of those types of applications.

    This looks like a super interesting project. I’ve talked before about how great it would be to combine ChatGPT with Wolfram Alpha. Well, that seems to be possible with LangChain. This Google Colab notebook and this HuggingFace project both appear to be examples of just that.
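
    To make that concrete, here is a minimal sketch of the ChatGPT-plus-Wolfram-Alpha idea using LangChain’s agent interface. The tool and agent names follow LangChain’s documentation as of this writing, both credentials are placeholders, and the library is evolving quickly, so treat this as a sketch rather than a recipe:

      import os

      from langchain.agents import initialize_agent, load_tools
      from langchain.llms import OpenAI

      # Placeholder credentials; both services require your own keys.
      os.environ["OPENAI_API_KEY"] = "sk-..."
      os.environ["WOLFRAM_ALPHA_APPID"] = "..."

      # The LLM handles natural language; Wolfram Alpha handles precise computation.
      llm = OpenAI(temperature=0)
      tools = load_tools(["wolfram-alpha"], llm=llm)

      # The agent decides, per question, whether to answer directly
      # or route the query out to Wolfram Alpha.
      agent = initialize_agent(tools, llm, agent="zero-shot-react-description")
      agent.run("What is the square root of the population of Ohio?")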

  • James Vincent at The Verge writes:

    In a glossy new video, [Boston Dynamics] has shown off its prototype Atlas robot tossing planks and tool bags around in a fake construction site.

    […]

    “We’re not just thinking about how to make the robot move dynamically through its environment, like we did in Parkour and Dance,” said Kuindersma. “Now, we’re starting to put Atlas to work and think about how the robot should be able to perceive and manipulate objects in its environment.”

    […]

    It’s a notable change in messaging from the Hyundai-owned company, which has never previously emphasized how its bipedal machines could be used in the workplace.

    In an announcement on the Boston Dynamics blog Calvin Hennick writes:

    While some Boston Dynamics robots, such as Spot and Stretch, are commercially available, Atlas is purely a research platform. The Atlas team focuses on pushing the forefront of what’s possible. The leaps and bounds forward in Atlas’ R&D can help improve the hardware and software of these other robots, while also advancing toward a “go anywhere, do anything” robot—capable of performing essentially all the same physical tasks as a person.

    James Vincent again:

    As ever, when parsing marketing materials from companies like Boston Dynamics, it’s important to notice what the company doesn’t say, as well as what it does. In this case, Boston Dynamics hasn’t announced a new product, it’s not saying it’s going to start selling Atlas, and it’s not making predictions about when its bipedal robots might work in factories. For now, we’re just getting something fun to watch. But that’s how Spot started, too.

    Do watch their YouTube video. It is, as with pretty much all of Boston Dynamics’ demonstrations, both super impressive and a little frightening. However, if you ever find yourself getting concerned about an imminent robot uprising, you might find some solace in the fact that the robots would surely go after Boston Dynamics employees first.

  • Riley Goodside and Spencer Papay write:

    Anthropic, an AI startup co-founded by former employees of OpenAI, has quietly begun testing a new, ChatGPT-like AI assistant named Claude.

    […]

    Anthropic’s research paper on Constitutional AI describes AnthropicLM v4-s3, a 52-billion-parameter, pre-trained model… Anthropic tells us that Claude is a new, larger model with architectural choices similar to those in the published research.

    For context, GPT-3 has 175 billion parameters.

    Claude can recall information across 8,000 tokens, more than any publicly known OpenAI model, though this ability was not reliable in our tests.

    This is, effectively, how much “short-term memory” an AI model has. You definitely don’t want any information to be pushed out of memory during a normal chat session. Ideally, an AI model would remember information across multiple chat sessions, although neither GPT-3 nor Claude has this ability at this time.
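
    To make the 8,000-token figure concrete, here is a small sketch that uses tiktoken, OpenAI’s open source tokenizer, to check how much of a context window a running transcript consumes. Claude’s actual tokenizer isn’t public, so the count below is only a rough approximation:

      import tiktoken  # OpenAI's open source tokenizer: pip install tiktoken

      CONTEXT_WINDOW = 8_000  # Claude's reported recall limit, per the article

      encoder = tiktoken.get_encoding("gpt2")

      transcript = "\n".join([
          "User: My favorite citrus fruit is the pomelo.",
          "Assistant: Noted! Your favorite citrus fruit is the pomelo.",
          # ...the rest of the chat session...
      ])

      used = len(encoder.encode(transcript))
      print(f"{used} / {CONTEXT_WINDOW} tokens used")
      if used > CONTEXT_WINDOW:
          print("The oldest messages are now outside the model's short-term memory.")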

    Later in the article, the authors perform some comparisons between Claude and ChatGPT (GPT-3.5). Here are the big takeaways:

    • Both models are bad at math but Claude, at least occasionally, recognizes this fact and refuses to answer math problems when asked.
    • ChatGPT is quite good at code generation. The code Claude generates contains significantly more errors.
    • Both models appear to be broadly equivalent at logical reasoning tasks.
    • Both models are good at text summarization.

    The article concludes:

    Overall, Claude is a serious competitor to ChatGPT, with improvements in many areas. While conceived as a demonstration of “constitutional” principles, Claude feels not only safer but more fun than ChatGPT.

    And this is all from a model with somewhere around one third as many parameters as GPT-3? I have a feeling this is going to be an exciting year for LLM developments.

  • Kalley Huang, writing for The New York Times:

    It is now not enough for an essay to have just a thesis, introduction, supporting paragraphs and a conclusion.

    “We need to up our game,” Mr. Aldama said. “The imagination, creativity and innovation of analysis that we usually deem an A paper needs to be trickling down into the B-range papers.”

    […]

    Other universities are trying to draw boundaries for A.I. Washington University in St. Louis and the University of Vermont in Burlington are drafting revisions to their academic integrity policies so their plagiarism definitions include generative A.I.

    Maybe a future for essay writing looks more like:

    1. Craft an effective prompt for a given assignment.
    2. Read and fact check the initial output. Revise your prompt and return to step one as necessary.
    3. Taking into account things learned during the fact-checking process, revise and rewrite the output you checked in step two. Cite external sources to support your claims.
    4. If your essay still fails an “AI detector” screening, that means you have not revised it enough; return to step three. If your essay contains factual inaccuracies or uncited claims, also return to step three.

    Yes, this still assumes there will be reliable “AI detector” services. Yes, there will still be a cat-and-mouse game where students find ways to trick the AI detection systems. I don’t think that is really something you can avoid. So, sure, update your academic integrity policy accordingly. Ultimately, though, I think you need to start from the assumption that generative AI will be an ongoing presence in the classroom. From there, encourage a classroom culture that embraces AI as an imperfect, but increasingly important, tool.



  • Cedric Chin writes about the development of the original iPhone’s keyboard:

    Nobody on the 15-engineer team quite knew what the ideal software keyboard would look like. Over the next few weeks, the engineers developed a wide variety of prototypes. One developed a Morse-code-inspired keyboard which would have the user combine taps and slides to mimic dots and dashes. Another developed a piano-like keyboard where users would need to click multiple keys at once (hence the name) to type a specific letter. The remaining prototypes downsized the usual QWERTY keyboard, but these came with their own set of problems. The buttons were too small and there was no tactile feedback to tell the user whether they had hit or missed the button.

    This is a great illustration of how the most obvious solution, in hindsight, is often not at all clear in the moment.

  • For whatever reason, I have never had the brain for mold making; any kind of intuitive understanding of the process eludes me. When to use a two-part mold, what objects are even suitable for casting, etc. Despite all of this, I periodically get the itch to try it again, which is exactly what I did this weekend.

    I ordered some Jesmonite, an interesting cross between plaster and resin that is really difficult to find in the United States despite being quite popular in the U.K., and decided to try casting a sycamore tree seed and two decorative gourds I grew last summer.

    I was completely unable to remove the sycamore seed from the silicone mold. The seed was probably too rough and porous. Next time I’ll try using some sort of mold release.

    The two gourds came out great though! Afterwards, I tried painting them with watercolors which worked much better than I was expecting it to.

  • § No work next Monday for Martin Luther King Day and then a “work from home” faculty work day on Tuesday. Great. On Wednesday, students will rotate classes for their third quarter which means I’ll be teaching a group of kids I haven’t seen in eight-ish weeks. I expect it will be a nice change of pace.


    § I started watching Three Pines, which honestly hasn’t hooked me yet and has mostly had the effect of making me want to re-watch Twin Peaks.

    I also saw The Devil’s Hour. I thought it was pretty good and I was super happy to see that it’s a limited series. It turns out, stories are often better when they have a pre-planned beginning, middle, and end. Perhaps the accelerated rate at which streaming services are canceling shows will encourage this trend to continue.

    Finally, I saw Pearl, Ti West’s prequel to X. I thought it had a fantastic atmosphere. The music was great; the set design had a fascinating quality of period authenticity while at the same time being unsettlingly plastic; even the colors were interesting in a way I can’t exactly place.


    § I swear, at some point in the past ten years autumn disappeared. The Midwest seemingly now transitions from 85 °F to 35 °F overnight. The season must have been more distinct before; whenever asked, I would always list it as my favorite! Anyway, there were a few days in the 50s this week, which was nice. Although, on the other hand, we also had like four inches of snow on Friday.

    I’ve noticed that the days getting longer is giving me an unexpected optimism. I am already starting to think about which vegetables I would like to try growing in the spring.


    § Links

    • Analog chess
      • “This is a version of chess where the pieces are not constrained to an 8x8 grid, and instead can move to any position on the board.”
      • See also: Really Bad Chess
    • Giffusion — Create GIFs using Stable Diffusion
      • I tried it out on Google Colab. It was a bunch of fun but the results weren’t especially impressive. I am still super excited for a true generative model for animation.
    • GLM-130B is an open source large language model. However, you should proceed with caution.
    • Q&A against documentation with GPT3 + OpenAI embeddings
      • A method of prompt engineering to easily “fine tune” GPT3 on your own data (a rough sketch of the idea follows this list)
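
    If I understand the method correctly, you embed your documents once, then at query time you find the passage most similar to the question and paste it into the prompt. Here is a deliberately naive sketch using the openai library; the sample documents are my own, and real implementations chunk and rank far more carefully:

      import numpy as np
      import openai

      openai.api_key = "sk-..."  # placeholder

      docs = [
          "Coturnix quail eggs hatch after roughly 17 to 18 days of incubation.",
          "Jesmonite is a water-based composite material used for casting.",
      ]

      def embed(texts):
          resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
          return np.array([item["embedding"] for item in resp["data"]])

      doc_vectors = embed(docs)  # compute once, store for reuse

      def answer(question):
          q_vector = embed([question])[0]
          # ada-002 embeddings are unit length, so a dot product is cosine similarity.
          context = docs[int(np.argmax(doc_vectors @ q_vector))]
          prompt = (
              "Answer the question using only this context:\n"
              f"{context}\n\nQuestion: {question}\nAnswer:"
          )
          resp = openai.Completion.create(
              model="text-davinci-003", prompt=prompt, max_tokens=100
          )
          return resp["choices"][0]["text"].strip()

      print(answer("How long do quail eggs take to hatch?"))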

    § Recipes

    • Gamja-tang — Korean pork and potato stew
      • Hmm… this recipe was good but 1) it tasted surprisingly similar to Kapusniak while 2) requiring a significantly more involved cooking process. I will probably make it again sometime though!
    • Chana masala
      • One of my favorites. Plus this used a bunch of frozen tomatoes from the garden, freeing up space in the freezer.
    • Not a recipe but I ate a pomelo — the largest citrus fruit — for the first time. I am tempted to say that I think it might be better than grapefruit. Much less bitter and possibly slightly sweeter.
  • Eleven Labs recently shared a demo of their new voice synthesis AI. It is worth listening to the audio samples. While I don’t think they are significantly better than the recent demo released by Apple, it is for precisely that reason that I think this is so noteworthy — the fact that small companies are able to build comparable offerings to the industry’s largest players is impressive.

    Also, I have to admit, their Steve Jobs voice simulation demo is remarkably convincing.

    Finally, as time goes on I am increasingly unable to understand why none of these recent advancements have trickled down into voice assistants. Why not hook up a speech recognition AI to GPT and then speak the result using one of these voice generation AIs? It must be inference cost, right? Otherwise, I must be missing something.

    Microsoft and OpenAI together could chain Whisper to ChatGPT to VALL-E and dub it Cortana 2.0. Or put it in a smart speaker and instantly blow the Amazon Alexa, Apple HomePod, and Google Home offerings out of the water. And that is just using projects OpenAI and Microsoft have already released publicly!
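
    Every stage of that pipeline is already scriptable, which is what makes the absence so puzzling. A rough sketch of the idea, using the open source whisper package for recognition, text-davinci-003 standing in for ChatGPT (which has no public API yet), and pyttsx3 as a crude placeholder for something like VALL-E (which has no public release at all):

      import openai   # pip install openai
      import pyttsx3  # pip install pyttsx3
      import whisper  # pip install openai-whisper

      openai.api_key = "sk-..."  # placeholder

      # 1. Speech to text with Whisper.
      asr_model = whisper.load_model("base")
      question = asr_model.transcribe("question.wav")["text"]

      # 2. Text to text with a GPT model.
      response = openai.Completion.create(
          model="text-davinci-003",
          prompt=f"You are a helpful voice assistant.\nQ: {question}\nA:",
          max_tokens=150,
      )
      answer = response["choices"][0]["text"].strip()

      # 3. Text to speech. A basic local synthesizer stands in for VALL-E here.
      tts = pyttsx3.init()
      tts.say(answer)
      tts.runAndWait()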

  • I wrote in December about how ChatGPT could be improved by routing relevant questions to Wolfram Alpha — i.e. neuro-symbolic AI. It sounds like Stephen Wolfram has similar thoughts:

    There’ll be plenty of cases where “raw ChatGPT” can help with people’s writing, make suggestions, or generate text that’s useful for various kinds of documents or interactions. But when it comes to setting up things that have to be perfect, machine learning just isn’t the way to do it—much as humans aren’t either.

    […]

    ChatGPT does great at the “human-like parts”, where there isn’t a precise “right answer”. But when it’s “put on the spot” for something precise, it often falls down. But the whole point here is that there’s a great way to solve this problem—by connecting ChatGPT to Wolfram|Alpha and all its computational knowledge “superpowers”. 

    […]

    Inside Wolfram|Alpha, everything is being turned into computational language, and into precise Wolfram Language code, that at some level has to be “perfect” to be reliably useful. But the crucial point is that ChatGPT doesn’t have to generate this. It can produce its usual natural language, and then Wolfram|Alpha can use its natural language understanding capabilities to translate that natural language into precise Wolfram Language.

    These are exactly the types of informal integrations I expect to see in spades once we finally get a viable open source alternative to GPT.
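
    For illustration, here is a back-of-the-napkin version of the routing I described in December: try the symbolic engine first and fall back to the language model. It uses the wolframalpha Python client with placeholder credentials, and the try-first heuristic is my own stand-in for a real question classifier:

      import openai        # pip install openai
      import wolframalpha  # pip install wolframalpha

      openai.api_key = "sk-..."             # placeholder
      wolfram = wolframalpha.Client("...")  # placeholder Wolfram Alpha App ID

      def ask(question: str) -> str:
          # Questions with a precise answer get routed to Wolfram|Alpha first.
          try:
              pod = next(wolfram.query(question).results)
              if pod.text:
                  return pod.text
          except StopIteration:
              pass  # Wolfram|Alpha returned no result pods for this input.
          # Everything else falls back to the language model.
          response = openai.Completion.create(
              model="text-davinci-003", prompt=question, max_tokens=200
          )
          return response["choices"][0]["text"].strip()

      print(ask("What is the derivative of x^3?"))   # symbolic: Wolfram|Alpha
      print(ask("Write a haiku about ice storms."))  # open-ended: the LLM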

  • Semafor:

    Microsoft has been in talks to invest $10 billion into the owner of ChatGPT… The funding, which would also include other venture firms, would value OpenAI… at $29 billion, including the new investment

    Gary Marcus:

    Whether you think $29 billion is a sensible valuation for OpenAI depends a lot on what you think of their future… On [one] hand, being valued at $29 billion dollars is really a lot for an AI company, historically speaking, on the other Altman often publicly hints that the company is close to AGI

    How much would AGI actually be worth? A few years back, PwC estimated that the overall AI market might be worth over $15 Trillion/year by the year 2030; McKinsey published a similar study, coming in at $13 trillion/year… If you really were close to being first to AGI, wouldn’t you want to stick around and take a big slice of that, with as much control as possible? My best guess? Altman doesn’t really know how to make OpenAI into the juggernaut that everybody else seems to think he’s got.

    Finally, Marcus shares some interesting information he received from an anonymous source:

    Turns out Semafor was wrong about the deal terms. If things get really really good OpenAI gets back control; I am told by a source who has seen the documents “Once $92 billion in profit plus $13 billion in initial investment are repaid [to Microsoft] and once the other venture investors earn $150 billion, all of the equity reverts back to OpenAI.” In that light, Altman’s play seems more like a hedge than a firesale; some cash now, a lot later if they are hugely successful.

    It is important to remember that OpenAI isn’t exactly a for-profit company but, instead, a “capped profit” company. From their press release announcing the new corporate structure:

    The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission… But any returns beyond that amount… are owned by the original OpenAI Nonprofit entity.

    OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.

    Of course, at the end of the day, OpenAI can always change course. From The Information:

    OpenAI has proposed a key concession as part of discussions with potential new investors. Instead of putting a hard cap on the profit sharing—essentially their return on investment—it could increase the cap 20% per year starting around 2025, said a person briefed on the change. Investors say this compromise, if it goes through, would make the deal more attractive because it would allow shareholders to obtain venture-level returns if the company becomes a moneymaker.

  • LAION-AI, the non-profit organization that created the original dataset behind Stable Diffusion, launched Open Assistant last week. From the project’s GitHub page:

    Open Assistant is a project meant to give everyone access to a great chat based large language model… In the same way that stable-diffusion helped the world make art and images in new ways we hope Open Assistant can help improve the world by improving language itself.

    Remember, LAION is not the company behind Stable Diffusion (that would be Stability AI); they just produced the training dataset. We have yet to see whether they can build a successful product. They have genuinely exciting plans though!

    We are not going to stop at replicating ChatGPT. We want to build the assistant of the future, able to not only write email and cover letters, but do meaningful work, use APIs, dynamically research information, and much more, with the ability to be personalized and extended by anyone. And we want to do this in a way that is open and accessible, which means we must not only build a great assistant, but also make it small and efficient enough to run on consumer hardware.

    Whether or not LAION is able to accomplish their goals, I am optimistic that we will see serious developments in the open source large language model space this year.

  • John Naughton, writing for The Guardian:

    [ChatGPT] reminds me, oddly enough, of spreadsheet software, which struck the business world like a thunderbolt in 1979 when Dan Bricklin and Bob Frankston wrote VisiCalc, the first spreadsheet program, for the Apple II computer

    Eventually, Microsoft wrote its own version and called it Excel, which now runs on every machine in every office in the developed world. It went from being an intriguing but useful augmentation of human capabilities to being a mundane accessory

    Digital spreadsheets are perhaps the best example of a computational tool successfully augmenting the day-to-day work of a huge number of people. Spreadsheets went from nonexistent to simultaneously indispensable and mundane unbelievably quickly. If a similar augmentation occurs for prose, it will be an equally transformative development, if not more so.

  • Jeff Kaufman wrote a fascinating piece arguing that nearly all online advertisement is probably illegal under GDPR as it currently stands:

    I think the online ads ecosystem is most likely illegal in Europe, and as more decisions come out it will become clear that it can’t be reworked to be within the bounds of the GDPR.

    The most surprising thing I learned from this article is that apparently it is legally required that cookie consent banners make the process of opting out as easy as opting in. I don’t think I have ever encountered a site where that is the case.

  • § Back to teaching after two weeks of winter vacation. Although, as always, I wish the vacation was longer, it feels nice to start getting back into my normal routines after the crazy holiday season. Worst case scenario: ten weeks until spring break, twenty-one until summer.


    § I have been listening to the album Distance by the band Erasers a lot after discovering it on James Reeves' list of favorite albums of 2022. Overall, the list is full of great minimal electronic artists that are all new to me. It is going to make the perfect soundtrack for some gray winter days ahead.


    § Longmont Potion Castle 20 was released on Friday. The tracks I have had the opportunity to listen to so far are amazing, as usual.


    § Three of the quails escaped into the garage which made for a real Yakety Sax evening as Caroline and I ran around trying to catch them in makeshift nets.


    § Links

    § Recipes

    Getting back into my work schedule this week meant much less cooking at home. I did at least get the opportunity to make one new-to-me recipe — arroz con pollo.

    Recipe discovery is difficult. I would love to find a personal cooking blog that is not full of SEO spam.

    • Cajun sausage and rice skillet
      • An old classic. I had to use some kielbasa that was left over from Kapusniak last week. Easy and quick to make and goes great with cornbread.
    • Arroz con pollo
      • This was good but not quite as good as my favorite Spanish rice recipe. I will definitely incorporate some elements from that recipe if I make this one again. A big positive is that I now have a huge quantity of very versatile leftovers.
  • Ann Gibbons, writing for Science.org:

    Ask medieval historian Michael McCormick what year was the worst to be alive, and he’s got an answer: “536.”

    A mysterious fog plunged Europe, the Middle East, and parts of Asia into darkness, day and night—for 18 months… initiating the coldest decade in the past 2300 years. Snow fell that summer in China; crops failed; people starved.

    Now, an ultraprecise analysis of ice from a Swiss glacier by a team led by McCormick and glaciologist Paul Mayewski… reported that a cataclysmic volcanic eruption in Iceland spewed ash across the Northern Hemisphere early in 536. Two other massive eruptions followed, in 540 and 547.

    The team deciphered this record using a new ultra–high-resolution method, in which a laser carves 120-micron slivers of ice, representing just a few days or weeks of snowfall, along the length of the core… The approach enabled the team to pinpoint storms, volcanic eruptions, and lead pollution down to the month or even less, going back 2000 years

    120 microns is roughly the diameter of a single grain of table salt.

  • Apple is introducing automatic narration of select books in their library. I expect this to eventually be an automatic addition to every relevant book on their service, although at the moment it appears to require a fair amount of manual review. Notice the “one to two month” lead time.

    From Apple.com:

    Apple Books digital narration brings together advanced speech synthesis technology with important work by teams of linguists, quality control specialists, and audio engineers to produce high-quality audiobooks from an ebook file.

    Our digital voices are created and optimized for specific genres. We’re starting with fiction and romance, and are accepting ebook submissions in these genres.

    Once your request is submitted, it takes one to two months to process the book and conduct quality checks. If the digitally narrated audiobook meets our quality and content standards, your audiobook will be ready to publish on the store.

    The voice samples at the link above are really impressive. I hope Apple brings these speech synthesis improvements to other parts of their ecosystem. Safari’s built-in text-to-speech feature is shockingly bad in comparison.

  • Dina Bass, reporting for Bloomberg:

    Microsoft Corp. is preparing to add OpenAI’s ChatGPT chatbot to its Bing search engine in a bid to lure users from rival Google, according to a person familiar with the plans.

    Microsoft is betting that the more conversational and contextual replies to users’ queries will win over search users by supplying better-quality answers beyond links

    The Redmond, Washington-based company may roll out the additional feature in the next several months, but it is still weighing both the chatbot’s accuracy and how quickly it can be included in the search engine

    Whether or not this succeeds will be determined by the UI decisions Microsoft makes here. I think the best idea, particularly when introducing this as a new interface element, is to frame the AI as an extension of the existing “instant answers” box. Allow the user to ask the AI clarifying questions in the context of their search. Leave the standard search results as they are. Don’t touch anything else. Below is a quick mockup of the UI I am imagining.

    Although I am not completely convinced that this will be an overall improvement for web search as a tool, I am excited to see how other players respond — especially Google. We may finally start seeing some innovation and experimentation again.

  • Take a moment to consider the following questions before you click:

    If you were tasked with designing a building in one of the coldest places in the world, what factors should you consider? Ice buildup, insulation, frozen pipes… there are a lot! Even if you limit yourself to just the doors: which direction should they open? How about the door handles? You better make sure nothing freezes shut!

    The anonymous writer behind the brr.fyi blog shares their observations from Antarctica:

    One of the most underrated and fascinating parts of McMurdo is its patchwork evolution over the decades. This is not a master-planned community. Rather, it is a series of organic responses to evolving operational needs.

    Nothing more clearly illustrates this than the doors to the buildings. I thought I’d share a collection of my favorite doors, to give a sense of what it’s like on a day-to-day basis doing the most basic task around town: entering and exiting buildings.

  • Petals is an open source project that allows you to run large language models on standard consumer hardware using distributed computing “BitTorrent-style”. From the GitHub repository:

    Petals runs large language models like BLOOM-176B collaboratively — you load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning.

    In the past I have written about how locally run, open source large language models will open the door to exciting new projects. This seems like an interesting alternative while we wait for optimizations that would make running these models fully on-device less resource intensive.
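
    The client API mirrors Hugging Face transformers. Here is a short sketch adapted from the project’s README; the class and model names are the project’s as of this writing and may well change:

      from transformers import BloomTokenizerFast
      from petals import DistributedBloomForCausalLM  # pip install petals

      MODEL_NAME = "bigscience/bloom-petals"

      tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
      # Only a few transformer blocks are loaded locally; the rest of the
      # 176B-parameter model is served by other peers in the public swarm.
      model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)

      inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
      outputs = model.generate(inputs, max_new_tokens=5)
      print(tokenizer.decode(outputs[0]))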

  • Pepys’ Diary is a website, newsletter, and RSS feed that publishes, in real time, the diary entries of 17th-century civil servant Samuel Pepys. The diary contains first-hand accounts of the Restoration, the Great Plague, and the Great Fire of London as they occurred.

    Here is a taste of what to expect. With the Fire of London raging, Pepys must think fast to save his Parmesan cheese. From the September 4th, 1666 entry:

    …the fire coming on in that narrow streete, on both sides, with infinite fury. Sir W. Batten not knowing how to remove his wine, did dig a pit in the garden, and laid it in there; …And in the evening Sir W. Pen and I did dig another, and put our wine in it; and I my Parmazan cheese, as well as my wine and some other things.

    The current reading just began on January 1st and will conclude in a decade with the final entry published on May 31st, 2033.

  • Happy new year!


    § The Pocket Operator has become my newest obsession. I had forgotten how much I enjoy experimenting with little musical toys. Begin the countdown to when I finally give up and buy an OP-1.


    § I got on a serious weird-movie kick this week after watching Triangle of Sadness, which Alex Cox and Merlin Mann mentioned on a recent episode of their Do By Friday podcast.

    Triangle of Sadness was alright but I thought The Square, also by Ruben Östlund, was amazing. Although I will admit that some of my enjoyment could be a consequence of going to art school and seeing a lot of hilariously embarrassing aspects of myself in many of the characters.

    After watching the two Östlund movies I inevitably had to see The Lobster, a movie I had been avoiding since seeing The Killing of a Sacred Deer a while back and not enjoying it much at all. I ended up loving The Lobster! It might be that Lanthimos writes such amazingly strange, surreal, uncomfortable dialog that I find it all too disturbing in a horror movie but hilarious in a comedy.


    § Re Triangle of Sadness: this song has been stuck in my head since seeing the movie.


    § There was an unbelievable pink sunset on Wednesday evening that I was actually able to capture a nice photo of. This is usually the type of situation that I find Apple’s computational photography engine “corrects” for, making it a really difficult subject to photograph.

    § Links

    § Recipes

    The unintentional theme this week was cabbage which is definitely a new favorite vegetable — kimchi, sauerkraut, what’s not to like?

    • Kapusniak, Polish kielbasa and cabbage soup
      • This was one of the best meals I have made in a really long time. Highly recommended. The only thing I will do differently next time is add a liiiitle more chicken broth to thin it out slightly.
      • Also, here is a video of Kenji making this recipe
      • JANUARY 8 UPDATE: I just realized that when I first made this recipe and wrote the above I had accidentally used half of the specified amount of broth — 4 cups instead of 8 cups — which explains a lot! So now I would suggest either using somewhere around 6 cups of broth or letting the whole soup boil down and condense for a while. Still a great recipe.
    • Cabbage Rolls
      • This turned out better than I expected but also was more of a pain than it was worth.
    • Kimchi soup
      • I like spicy foods but this was too spicy for me. It could be the chili flakes I used though. Next time I will either use fewer chili flakes or try a different brand.
    • Thai-style beef with basil and chiles
      • Not too special but pretty good! I made this one to have with the kimchi soup and it was a good sidekick. Caroline really liked it though.
  • John Naughton writes:

    2023 looks like being more like 1993 than any other year in recent history. In Spring of that year Marc Andreessen and Eric Bina released Mosaic, the first modern Web browser and suddenly the non-technical world understood what this strange ‘Internet’ thing was for.

    We’ve now reached a similar inflection point with something called ‘AI’

    The first killer-app of Generative AI has just arrived in the form of ChatGPT… It’s become wildly popular almost overnight — going from zero to a million users in five days. Why? Because everyone can intuitively get that it can do something that they feel is useful but personally find difficult to do themselves. Which means that — finally — they understand what this ‘AI’ thing is for.
