If you were to ask me who I'd like to be when I grow up, Kean Walmsley would be high on my list.
Kean has crafted a lifestyle that prioritizes fun, freedom, flexibility, and family, leaving room for traveling and working around the world, blogging, teaching, sports, research, and more.
Please enjoy this episode, its transcript, and its show notes.
If you are wondering where the audio files of your Apple Voice Memos are, in case you want to browse through them, see their file sizes, and copy or remove them, they are located in the com.apple.voicememos folder inside of the Library folder of your macOS user folder. For instance, if your username were john, you'd find it inside john's Library folder.
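If you'd like to inspect those recordings from Terminal, here's a hedged sketch. VOICE_MEMOS_DIR below is a placeholder, not the exact path; point it at wherever the com.apple.voicememos folder lives inside your Library folder.

```shell
# VOICE_MEMOS_DIR is a placeholder; adjust it to the actual location of
# the com.apple.voicememos folder inside your user's Library folder.
VOICE_MEMOS_DIR="$HOME/Library/com.apple.voicememos"

if [ -d "$VOICE_MEMOS_DIR" ]; then
  # List each audio file with a human-readable size.
  ls -lh "$VOICE_MEMOS_DIR"
else
  echo "Folder not found: $VOICE_MEMOS_DIR"
fi
```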
Hope that helps!
Many Mondays, I find myself empty-handed—exactly as I did yesterday—browsing through my journals in search of a story I could share today.
Back when I started in July 2019, I committed to posting a short story every Tuesday, both in English and Spanish, to my sketches newsletter.
I keep getting surprised by the number of words I've written and the number of things I've drawn over the past year.
My hope is that I'll find the time to write more "deeply," preparing posts and sketches in advance and having more time to mull over my own thoughts and ideas.
But hey, here it is.
I have no real reason to keep going other than an agreement with myself, and the intention to keep improving my sketching, writing, and storytelling skills.
Yesterday, I shared last week's sketch on Hacker News.
sktrdie asked, What am I looking at?
Art, I think.
And I also think that's what all of this is about in the end: an art project.
Last week's text was short (maybe lazy). My intention was to generate a feeling of incompleteness. To leave room for interpretation.
In John Maeda's words, Perhaps this is the fundamental distinction between pure art and pure design. […] The best art makes your head spin with questions. 1
Yesterday, I did my first live stream to test the waters while editing Kean Walmsley's episode of the Getting Simple podcast, which will be May 2020's monthly episode.
I was using OBS on a MacBook Pro to stream, via Ethernet, my webcam video and a 4K display at 2560x1440, at thirty frames per second and 9000 kbps, recording locally at the same resolution and frame rate at 25000 kbps, and using Adobe Audition to edit an individual audio file.
While YouTube considered my stream "healthy," the problem, I believe, was that I was streaming with Apple's Hardware Encoder (which supposedly frees up a lot of CPU while streaming). Even so, streaming slowed down every single effect I applied in Audition, making my share-out counterproductive, as all I was trying to do was test my live-streaming setup while editing the podcast.
Long story short, I probably won't be editing the podcast live anymore, at least not with this setup. I might be able to pipe my MacBook Pro's screen through an Elgato HD60 S+ video-capture device and then stream from a different machine, so the machine that's running Adobe Audition is not the same as the machine that's streaming.
That might complicate the setup but might allow for this sort of streaming. For other coding tutorials, a single machine should work fine.
If that's your thing, tune in on YouTube (@nonomartinezalonso) to know when I go live next (and make sure to turn on all channel notifications to be notified).
And you gave us ads and all sorts of unsolicited connections.
This sparked a smile on my face.
I don't even remember when I paid for iA Writer (both desktop and mobile), yet I keep getting awesome updates on a consistent basis from the Information Architects team.
Their latest update — iA Writer 5.5 — showed up yesterday on my machines with PDF previews (!) that update in real time, which lets me skip one step of my process: exporting, opening the PDF in Preview, and then iterating through the changes I want to make. The preview respects the page size you've set up for printing, as well as the title page, headers, footers, and page numbering.
iA Writer 5.5 for Mac and iOS has arrived. The update adds a powerful mix of functionality and delicate subtlety that will improve your writing workflow.
Congratulations to the team, really. And thanks so much for making my writing experience such a joy.
I recently got Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition by Aurélien Géron as a recommendation from Keith.
This second version updates all code samples to work with TensorFlow 2, and the repository that accompanies the book—ageron/handson-ml2—is also updated frequently to catch up with the latest updates.
The Python notebooks on that GitHub repository alone are super helpful to get an overview of state-of-the-art machine learning and deep learning techniques: from the basics of machine learning and classic techniques like classification, support vector machines, or decision trees, to the latest techniques to code neural networks, customize and train them, load and pre-process data, and tackle natural language processing, computer vision, autoencoders and GANs, or reinforcement learning.
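As a taste of those classic techniques, here's a minimal sketch of my own (not code from the book or its repository) that trains two of the classifiers mentioned above on scikit-learn's built-in iris dataset:

```python
# A quick look at two classic classifiers the book covers: a support
# vector machine and a decision tree, trained on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

scores = {}
for model in (SVC(), DecisionTreeClassifier(random_state=42)):
    model.fit(X_train, y_train)  # learn from the training split
    scores[type(model).__name__] = model.score(X_test, y_test)

print(scores)  # accuracy on the held-out test split
```

The notebooks in the repository go far beyond this, but the fit-then-score pattern above is the common thread running through the classic techniques.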
Our deep neural network Graph2Plan is a learning framework for automated floorplan generation from layout graphs. The trained network can generate floorplans based on an input building boundary only (a-b), like in previous works. In addition, we allow users to add a variety of constraints such as room counts (c), room connectivity (d), and other layout graph edits. Multiple generated floorplans which fulfill the input constraints are shown.
I'm in the midst of reading The Laws of Simplicity by John Maeda. 1
I love the tone of the book—sharp, on point, but also personal, funny, and entertaining—and the way he invites the reader, I welcome you to this creative experience.
He made it, exactly, 100 pages.
I wanted to share three out of his ten laws with you today.
Law 3. Time. Savings in time feel like simplicity.
Law 4. Learn. Knowledge makes everything simpler.
Law 7. Emotion. More emotions are better than less.
Their demo video made me look. This seems like a super useful tool for creatives that would let us skip things like cropping, editing, sharing via Email or Airdrop, and much more. Seamless. As John Maeda would say, "Savings in time feel like simplicity."
Finally a practical use of AR. —The Verge
They have a form to request early access.
Here's an episode in memory of Patrick Winston which opens the new Sketches series with a short piece on story understanding with artificial intelligence and my experience attending Winston's 6.034 lectures at MIT. "Don't just tell me it's a school bus. Tell me why you think it's a school bus."
I've sketched for the last 365 days. A year ago I decided not only to sketch daily but to write short stories and publish them online every Tuesday. The first story went out on July 2, 2019. And today is the first time I'm telling you one of those stories in a podcast, with my voice.
Please enjoy this episode, its transcript, and its show notes.
We propose In-Domain GAN inversion (IDInvert) by first training a novel domain-guided encoder which is able to produce in-domain latent code, and then performing domain-regularized optimization which involves the encoder as a regularizer to land the code inside the latent space when being finetuned. The in-domain codes produced by IDInvert enable high-quality real image editing with fixed GAN models.
With this open time
You do not have to write the next bestselling novel
You do not have to get in the best shape of your life
You do not have to start that podcast
What you can do instead is observe this pause as an opportunity
The same systems we see crumbling in society
Are being called to crumble in each of us individually
The systems that taught us we are machines
That live to produce & we are disposable if we are not doing so
The systems that taught us monetary gain takes priority over humanity
The systems that create our insecurities then capitalize off of them
What if we became curious with this free time, & had no agenda other than to experience being?
What if you created art for the sake of creating?
What if you allowed yourself to rest & cry & laugh & play & get curious about whatever arises in you?
What if our true purpose is in this space?
As if mother earth is saying: we can no longer carry on this way,
The time is now - I am reminding you who you are.
Will you remember?
Via Ana García Puyol.
Connect directly to RunwayML models with only a few lines of code to build web apps, chatbots, plugins, and more. Hosted Models live on the web and can be used anytime, anywhere, without requiring RunwayML to be open!
To write non-fiction, you want to know as much as you can about a given subject.
Your knowledge might come from different sources—even your own memory—but relying on memory can be dangerous.
Memories are temporarily stored in the hippocampus (the part of your brain that acts as a daily memory cache) and only transferred into a long-term storage device (the neocortex) after a good night's sleep. In fact, the less quality sleep you get, the harder it is to retain your memories in old age. 1
Often, my memories of certain events are limited to what's written on the page, and I repeatedly wish I had added just a bit more detail.
That's why I prefer to write daily.
I want to know more.
As I forget more and more details of those future-proofed memories, each of my written words gains value.
Today is a new opportunity to add more depth.
How are you feeling?
What's your plan for the day?
Where are you writing from?
What pen (or keyboard) are you using to write?
What are you wearing?
What worries you?
How did you sleep today?
We’ve officially launched a new podcast API for developers. In layman's terms, this means third-party developers can now build powerful new experiences for audiences, leveraging all of Spotify’s public, podcast-related data. What does that look like in practice? Maybe it’s an app that recommends episodes to a listener based on what their network is into, or a calendar integration notifying fans when a new episode is available. With data from over a million podcasts and counting, the possibilities are endless.
If you want to dive into the technical nitty gritty, then jump on over to our Spotify for Developers blog to learn more about how you can start exploring. And if you want to learn more about what this means for you and your show, we’re here to walk you through it.
The first bite of one of the dried apricots I bought at Market Basket teleported me back to the mountain bike trips my cousins and I would take with my father, early on Sunday mornings.
Ito, Nacho, and myself—and Dad leading the way—would go through various routes in Torre de Benagalbón, often using Santillán Stream as our starting point. The beginning was always familiar: we'd leave home and reach the river mouth within minutes, biking through "El Chalet" (what used to be the summer house of my uncle's family), passing through a small bridge below road N-340, and leaving the nuns' school behind.
In our childhood, it wasn't long until the landscape turned into a wild route. We'd only spot little farmer settlements and other informal constructions along our way.
Today, a big chunk of land has been built on. The route has become a small stream, often dry, along a set of housing units built over the past twenty years.
Continuing with our journey, we'd bike along Añoreta's golf course (where my dad plays religiously every week1) and pass below the A-7 highway bridge. When biking through this area, we'd be on the lookout for golf balls. We knew locals would have done their rounds in the early morning, but balls were constantly being knocked out of the course and we'd always collect a few.
Our destination changed every weekend and we'd end up in different places, often making a stop and sitting on the floor to eat a sandwich.2 It was my father who'd lead the way and decide which tracks to follow. I've never known how he'd manage to orient himself to reach all of those places. I guess you don't think about it when it's on someone else's plate to decide.
Wherever it is that we went, those dried apricots (which we call orejones back home) were a constant. Both their taste and smell bring back memories of our bike trips across the streams of Torre de Benagalbón.
Dad loves them.
I managed to make this work by unlinking, then reinstalling Python:
brew reinstall python@2
I was having this issue when trying to install Google Cloud SDK. After doing the previous steps, I could run the installer without a problem.
A site I discovered on ProductHunt with "activities to keep kids busy while they're stuck at home."
Early morning on December 20, 2016, I found my way into a huge sports field at MIT, filled with evenly spaced tables ready for an exam. Nervous, as if I were back in school, I was the first one to get there. Our professor would get there a bit later—that was Patrick Henry Winston.1
Three months earlier, on September 7, 2016, I would attend what was the first of a series of lectures of Winston's introductory course to artificial intelligence—6.034—and would sit in the first row of Huntington Hall2, room 10-250, colloquially known as "Ten Two Fifty," located right below the Great Dome3 of MIT.
Some days, I'd arrive early and get a chance to talk to Patrick for a bit before class. What's the most dangerous power tool you've ever used? he asked me one day. Silence. I didn't know what to answer. I thought an architect would have used power tools, he followed up. I was pleased to see he knew my name just a couple of weeks into the course. In retrospect, I find most of my "tools" these days to be virtual pieces of software.
In one of those classes—as if it were a line from Jonathan Nolan and Lisa Joy's Westworld series (2016)—Winston emphasized the relevance of the following question: Can you explain why you think so?
To Winston, whether a machine is able to answer questions of the type of why and how it reached a conclusion in a humanlike way was as important as, or even more than, the conclusion or the answer itself.
"Genesis supports steps toward story understanding," reads the headline of his draft paper with Dylan Holmes, titled The Genesis Manifesto: Story Understanding and Human Intelligence4, as of December 13, 2016, barely ten days after the release of the HBO series' first-season finale. "To understand what makes humans uniquely intelligent, we build computational models of how humans tell and understand stories."5
A system like Genesis is meant to be on top of all other technologies and make the system self-conscious. Genesis can understand stories, answer questions, and—unlike other narrow artificial intelligence systems6—reason and explain why it reaches its conclusions.
Winston shared a fascinating (yet worrying) idea in class. If you don't know how a program gets to a conclusion, you can't trust it. It's not possible to debug it. As a matter of fact, we rarely know how machines work, but we still give away our trust for their convenience.
Three years ago, on April 20, 2017, I met with Patrick to ask for his feedback on the project I was working on at the time—Suggestive Drawing7. He tested one of my first working prototypes, a drawing app running on an iPad with an Apple Pencil.
Patrick sketched these two flowers.
Patrick Henry Winston's free-hand flower sketches. Timestamped at April 20, 2017, 15:28.
A few seconds later, the system returned a prediction for each of them using a generative machine learning model that only knew about daisies.
Pix2Pix predictions using Patrick's flower sketches as input with a model trained to learn a mapping from line sketches of flowers to daisy flower photo textures.
Output processed with an alpha mask.
That's pretty cool! Patrick said. We discussed the project for half an hour and I left his office at Stata Center.
That was the last time I saw him.
Patrick passed away on July 20, 2019. His memorial8, held in October 2019, surfaced the fact that Patrick influenced many people's lives in profoundly positive ways. Not only as a teacher or a mentor, but as someone who loved sharing the experience he had acquired over years of teaching.
I never had a chance to interview Winston for the podcast, but I'd have loved to hear more about his worldview. Luckily, he contributed a great amount through numerous online lectures, talks, and other learning resources.
There's a sentence that Patrick said that will stick with me for the rest of my life.
Stories are the answer.
Patrick Henry Winston (1943-2019) was the Ford Professor of Artificial Intelligence and Computer Science at the Massachusetts Institute of Technology (MIT). I invite you to watch his Hello World, Hello MIT talk (2019) to learn more about his worldview and his contributions, and to watch his 6.034 lectures online. ↩
The day I sketched this view was the day I met Pier Gustafson for the first time. He showed up biking across Killian Court, right in front of the building that was named after MIT's 10th president, James Rhyne Killian Jr. I often passed through this location when running along the Charles River. I had this sketch on the back-burner for a while now, and by chance I decided to prepare it for this Tuesday, exactly three years after the last time I met with Patrick. ↩
Winston, Patrick H., and Dylan Holmes. The Genesis Manifesto: Story Understanding and Human Intelligence. 2017. ↩
Suggestive Drawing Among Human and Artificial Intelligences (May 2017) was my master's thesis at the Harvard Graduate School of Design, in which I explore the role of machine learning in design or, more specifically, in drawing. ↩
On April 29, 2020, Patrick's colleagues, students, friends, and acquaintances were invited to join PHWFest, a gathering to share memories and experiences, an event that has been postponed due to the current COVID-19 situation in Boston. ↩
```python
import polyscope as ps

# Initialize polyscope
ps.init()

### Register a point cloud
# `my_points` is a Nx3 numpy array
ps.register_point_cloud("my points", my_points)

### Register a mesh
# `verts` is a Nx3 numpy array of vertex positions
# `faces` is a Fx3 array of indices, or a nested list
ps.register_surface_mesh("my mesh", verts, faces, smooth_shade=True)

# Add a scalar function and a vector function defined on the mesh
# vertex_scalar is a length V numpy array of values
# face_vectors is an Fx3 array of vectors per face
ps.get_surface_mesh("my mesh").add_scalar_quantity(
    "my_scalar", vertex_scalar, defined_on='vertices', cmap='blues')
ps.get_surface_mesh("my mesh").add_vector_quantity(
    "my_vector", face_vectors, defined_on='faces', color=(0.2, 0.5, 0.5))

# View the point cloud and mesh we just registered in the 3D UI
ps.show()
```
The world is forcefully slowing down. Wherever it is you are, I really hope you and your close ones are staying safe and healthy. For me, this is day thirty secluded at home, and I can't wait to walk on the beach, go for a run, and spend time with family and friends.
If you want to be part of a future episode on how this situation is altering the way we work and live our lives, I'd love to hear from you. Send me a voice message.
In the words of Yuval Noah Harari, "we should ask ourselves not only how to overcome the immediate threat, but also what kind of world we will inhabit once the storm passes."
Today, I bring you an experimental episode with Scott Mitchell, in which he jumps in time to dissect his own experimentation life philosophy, his efforts to remove creative friction, and his worldview.
I loved to learn about Scott's metaphor of the arena, experiments he's carried out over the past years, and his current solo adventure.
Sign in with Apple provides a fast, private way to sign into apps and websites, giving people a consistent experience they can trust and the convenience of not having to remember multiple accounts and passwords.
Back in August 2018, Panagiotis Michalatos and I sat down at the back porch of his house in Cambridge, Massachusetts, to chat for a couple of hours. Pan, as people often call him, is one of the most intelligent people I've ever met, and I was lucky enough to have him share his idiosyncratic worldview over a microphone with me.1
The way he lives and works, the clothes he wears, and the way he designs or codes, inspired me to think of one word: minimalism.
Minimalism is the reduction of anything to its essential elements, stripping out the superfluous and bringing to light nuances that might otherwise go unnoticed. The result of that reduction is what we often call simple.
Paradoxically, simplifying any process, artifact, or concept is complex. Minimalism and simplicity are hard. Our nomad predecessors would clutter a space and, after its use, move somewhere else, start from scratch, and let nature clean up the mess. But we're stuck in one place.
In our times, minimalism often implies getting rid of possessions and keeping only the things we use and value. Certainly, not something everyone can afford.
As Pan told me in his podcast episode, when you have too little, you want to hold on to anything that comes your way, because you can lose it immediately. "You need to have the luxury to choose to simplify your life."
Here are my highlights from Harari's publication on the Financial Times:
[W]e should ask ourselves not only how to overcome the immediate threat, but also what kind of world we will inhabit once the storm passes.
In normal times, governments, businesses and educational boards would never agree to conduct such experiments. But these aren’t normal times.
Hitherto, when your finger touched the screen of your smartphone and clicked on a link, the government wanted to know what exactly your finger was clicking on. But with coronavirus, the focus of interest shifts. Now the government wants to know the temperature of your finger and the blood-pressure under its skin.
In recent years both governments and corporations have been using ever more sophisticated technologies to track, monitor and manipulate people. Yet if we are not careful, the epidemic might nevertheless mark an important watershed in the history of surveillance. Not only because it might normalise the deployment of mass surveillance tools in countries that have so far rejected them, but even more so because it signifies a dramatic transition from “over the skin” to “under the skin” surveillance.
The downside is, of course, that this would give legitimacy to a terrifying new surveillance system. If you know, for example, that I clicked on a Fox News link rather than a CNN link, that can teach you something about my political views and perhaps even my personality. But if you can monitor what happens to my body temperature, blood pressure and heart-rate as I watch the video clip, you can learn what makes me laugh, what makes me cry, and what makes me really, really angry.
When the subject keeps moving, it's hard to capture her face. You try to sketch fast, but sometimes there's too much movement, too quickly, and you can't capture the facial features that make someone who they are. But you start getting used to it. You can learn the basic facial proportions to "complete the puzzle" when the subject moves (or is gone) and you've only got a few elements on the page.
Just yesterday, Google announced a new challenge.
Reconstructing 3D objects and buildings from a series of images is a well-known problem in computer vision, known as Structure-from-Motion (SfM). It has diverse applications in photography and cultural heritage preservation (e.g., allowing people to explore the sculptures of Rapa Nui in a browser) and powers many services across Google Maps, such as the 3D models created from StreetView and aerial imagery. In these examples, images are usually captured by operators under controlled conditions. While this ensures homogeneous data with a uniform, high-quality appearance in the images and the final reconstruction, it also limits the diversity of sites captured and the viewpoints from which they are seen. What if, instead of using images from tightly controlled conditions, one could apply SfM techniques to better capture the richness of the world using the vast amounts of unstructured image collections freely available on the internet?
We hope this benchmark, dataset and challenge helps advance the state of the art in 3D reconstruction with heterogeneous images. If you’re interested in participating in the challenge, please see the 2020 Image Matching Challenge website for more details.