Nono.MA

MARCH 31, 2020

Pretty impressed by imgix's JSON Output Format. Long story short, it provides a JSON metadata file for each resource you serve via imgix (say, an image or a video), as in the example below, just by appending fm=json to any image request.

For this image:

https://nono.imgix.net/img/u/profile-nono-ma.jpg

You'd load the JSON at:

https://nono.imgix.net/img/u/profile-nono-ma.jpg?fm=json

What's nice is that with the imgix.js JavaScript library you can fetch the JSON file for each picture and, before loading any image data, decide how to deliver the image. In this CodePen provided by imgix, they crop a set of images to fit a certain aspect ratio (which ends up saving data, as the cropped bits of each image are never loaded), while also avoiding CSS background-image tricks to fit an image to a given aspect ratio (as the image already has it!).

{
    "Exif": {
        "PixelXDimension": 1500,
        "DateTimeDigitized": "2019:11:20 10:43:28",
        "PixelYDimension": 1500,
        "ColorSpace": 65535
    },
    "Orientation": 1,
    "Output": {},
    "Content-Type": "image\/jpeg",
    "JFIF": {
        "IsProgressive": true
    },
    "DPIWidth": 144,
    "Content-Length": "180595",
    "Depth": 8,
    "ColorModel": "RGB",
    "DPIHeight": 144,
    "TIFF": {
        "ResolutionUnit": 2,
        "DateTime": "2019:11:20 11:17:40",
        "Orientation": 1,
        "Software": "Adobe Photoshop CC 2019 (Macintosh)",
        "YResolution": 144,
        "XResolution": 144
    },
    "PixelWidth": 1500,
    "PixelHeight": 1500,
    "ProfileName": "Display"
}

And here's the JavaScript included in that CodePen that does all the magic.

// Demonstrate the use of the `fm=json` parameter to resize images
// to a certain aspect ratio, using ES6.

let ratio = 16 / 9;
let maxSize = 300;

let placeImages = function () {
    jQuery('.imgix-item').each((i, value) => {
        let $elem = jQuery(value);
        // We pull down the image specified by the 'data-src' attribute
        // of each .imgix-item, but append the "?fm=json" query string to it.
        // This instructs imgix to return the JSON Output Format instead of
        // a manipulated image.
        let url = new imgix.URL($elem.attr('data-src'), { fm: "json" }).getUrl();

        jQuery.ajax(url).success((data) => {
            let newWidth, newHeight;

            // Next, we compute the new height/width params for 
            // each of our images.
            if (data.PixelHeight > data.PixelWidth) {
                newHeight = maxSize;
                newWidth = Math.ceil(newHeight / ratio);
            } else {
                newWidth = maxSize;
                newHeight = Math.ceil(newWidth / ratio);
            }

            // Now, we apply these to our actual images, setting the 'src'
            // attribute for the first time.
            $elem.get(0).src = new imgix.URL($elem.attr('data-src'), {
                w: newWidth,
                h: newHeight,
                fit: "crop"
            }).getUrl();
        })
    });
}

jQuery(document).ready(placeImages);
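
As an aside, the .success() callback used above is deprecated in recent jQuery versions. Here's a rough, jQuery-free sketch of the same idea in TypeScript using the Fetch API; the .imgix-item class, the data-src attribute, and the PixelWidth/PixelHeight fields come from the example above, while everything else is my own assumption:

const ratio = 16 / 9;
const maxSize = 300;

async function placeImage(img: HTMLImageElement) {
    const src = img.dataset.src as string;

    // Request the JSON Output Format instead of image data.
    const response = await fetch(`${src}?fm=json`);
    const meta = await response.json();

    // Compute the new width/height, as in the CodePen above.
    const landscape = meta.PixelWidth >= meta.PixelHeight;
    const w = landscape ? maxSize : Math.ceil(maxSize / ratio);
    const h = landscape ? Math.ceil(maxSize / ratio) : maxSize;

    // Let imgix crop server-side so cropped-out pixels are never downloaded.
    img.src = `${src}?w=${w}&h=${h}&fit=crop`;
}

document.querySelectorAll<HTMLImageElement>(".imgix-item").forEach((img) => placeImage(img));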

MARCH 30, 2020

# Combine all PDFs in a directory into a single file with Ghostscript
gs -dNOPAUSE -sDEVICE=pdfwrite \
-sOutputFile=/output/path/combined.pdf \
-dBATCH /input/path/to/pdfs/*.pdf

MARCH 24, 2020

Here's something I learned about Amazon SageMaker today at work.

You can create notebook instances with different instance types (say, ml.t2.medium or ml.p3.2xlarge) and use a set of kernels that have been set up for you. These are conda (Anaconda) environments exposed as Jupyter notebook kernels that execute the commands you write in the Python notebook.

What I learned today is that you can create your own conda environments and expose them as kernels, so you're not limited to the kernels offered by AWS.

This is the sample environment I set up today. These commands should be run in a Terminal window on a SageMaker notebook instance, but they can most likely run in any environment with conda installed.

# Create new conda environment named env_tf210_p36
$ conda create --name env_tf210_p36 python=3.6 tensorflow-gpu=2.1.0 ipykernel tensorflow-datasets matplotlib pillow keras

# Enable conda on bash
$ echo ". /home/ec2-user/anaconda3/etc/profile.d/conda.sh" >> ~/.bashrc

# Enter bash (if you're not already running in bash)
$ bash

# Activate your freshly created environment
$ conda activate env_tf210_p36

# Install GitHub dependencies
$ pip install git+https://github.com/tensorflow/examples.git

# Now you have your environment set up - Party!
# ..

# When you're ready to leave
$ conda deactivate

How do we expose our new conda environment as a SageMaker kernel?

# Activate the conda environment (as it has ipykernel installed)
$ conda activate env_tf210_p36

# Expose your conda environment with ipykernel
$ python -m ipykernel install --user --name env_tf210_p36 --display-name "My Env (tf_2.1.0 py_3.6)"

After reloading your notebook instance you should see your custom environment appear in the launcher and in the notebook kernel selector.

What if you don't want to repeat this process over and over and over?

You can create a lifecycle configuration on SageMaker that will run this initial environment-creation setup every time you create a new notebook instance. (Create a new Lifecycle Configuration and paste the following code inside the Create Notebook tab.)


#!/bin/bash

set -e

# OVERVIEW
# This script creates and configures the env_tf210_p36 environment.

sudo -u ec2-user -i <<EOF

echo ". /home/ec2-user/anaconda3/etc/profile.d/conda.sh" >> ~/.bashrc

# Create custom conda environment
conda create --name env_tf210_p36 python=3.6 tensorflow-gpu=2.1.0 ipykernel tensorflow-datasets matplotlib pillow keras -y

# Activate our freshly created environment
source /home/ec2-user/anaconda3/bin/activate env_tf210_p36

# Install git-repository dependencies
pip install -q git+https://github.com/tensorflow/examples.git

# Expose environment as kernel
python -m ipykernel install --user --name env_tf210_p36 --display-name My_Env_tf_2.1.0_py_3.6

# Deactivate environment
source /home/ec2-user/anaconda3/bin/deactivate

EOF

That way you won't have to set up each new notebook instance you create; you'll just have to pick the lifecycle configuration you created. Take a look at Amazon SageMaker notebook instance Lifecycle Configuration samples.

MARCH 20, 2020

Maker.js: Parametric CNC Drawings Using JavaScript

Microsoft open sourced Maker.js twenty-five days ago, a JavaScript library to create drawings in the browser for CNC and laser cutting.

I love the playground site they made to share parametric scripts (say, a smiley face, a "hello, world" text, a floor plan, and more). See all demos.
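
To give a sense of the API, here's a minimal sketch in TypeScript, based on my reading of the Maker.js docs (RoundRectangle, Circle, and toSVG are real Maker.js exports; the plate model and its parameters are made up for illustration):

import * as makerjs from "makerjs";

// A drawing is a plain, serializable object of paths and models.
function plate(width: number, height: number, holeRadius: number): makerjs.IModel {
    return {
        models: {
            // Outline with rounded corners
            outline: new makerjs.models.RoundRectangle(width, height, 5),
        },
        paths: {
            // A centered hole
            hole: new makerjs.paths.Circle([width / 2, height / 2], holeRadius),
        },
    };
}

// Export the parametric drawing as SVG (DXF export is also available).
console.log(makerjs.exporter.toSVG(plate(100, 60, 10)));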

From their website:

  • Drawings are a simple JavaScript object which can be serialized / deserialized conventionally with JSON. This also makes a drawing easy to clone.
  • Other people's Models can be required the Node.js way, modified, and re-exported.
  • Models can be scaled, distorted, measured, and converted to different unit systems.
  • Paths can be distorted.
  • Models can be rotated or mirrored.
  • Find intersection points or intersection angles of paths.
  • Traverse a model tree to reason over its children.
  • Detect chains formed by paths connecting end to end.
  • Get the points along a path or along a chain of paths.
  • Easily add a curvature at the joint between any 2 paths, using a traditional or a dogbone fillet.
  • Combine models with boolean operations to get unions, intersections, or punches.
  • Expand paths to simulate a stroke thickness, with the option to bevel joints.
  • Outline model to create a surrounding outline, with the option to bevel joints.
  • Layout clones into rows, columns, grids, bricks, or honeycombs.

Via @alfarok's GitHub stars.

MARCH 18, 2020

Here are some loose thoughts on what I've been tinkering with for the past months or years.

As of late, I've been working on Folio—the content-management system this site runs on—to add new features, fix bugs here and there, and make it easier to deploy across multiple websites, including mine and client sites. The system keeps getting better, and I repeatedly ask myself whether I'm reinventing the wheel in many areas. I make use of great third-party packages developed with care and thoughtfulness by other developers. It's incredible when they work but awful when other developers stop updating them and your code breaks. I now pay more and more attention to those GitHub stars (★) and pick carefully which packages to implement. Software rots.

I've learned a lot about managing my own Linux machines—both from scratch and from an existing image—to have a new site ready within minutes (at least, when I don't hit an unknown problem that steals a few hours from my day). I'm mainly deploying apps with Nginx and Laravel, but I've also learned to run and deploy Node.js apps (with PM2), serve Docker images, and run Python and Golang programs.

I'm trying to thoroughly document all the troubleshooting I go through so I don't have to dig through the internet to fix a bug I've fixed before. While it's obvious how to fix a bug you encountered yesterday, some bugs don't show up again for a long time, and you can save hours of work by keeping good notes.

A recent practice I've started playing with is creating automation files. I'm slowly getting acquainted with Makefiles: text files that define commands, each a list of shell calls that execute when you type make command-name in your Terminal. These commands run not only on Linux machines but also in the macOS terminal, so I can run most of my automation scripts both on the desktop and on Linux servers. Here's a sample Makefile to set up a Digital Ocean droplet.

I build Folio mainly for myself. There are many systems like it but this one is entirely built by me, and it helps me learn new programming features as well as modern patterns and techniques by browsing other people's code when I use their packages. Many will hate Folio—simply because it runs on PHP—but I believe Laravel is making this programming language great again. (Trust me, I did PhpNuke sites back in 2003 and this is light years ahead.) Laravel feels like an updated version of Ruby on Rails.

I'm migrating most of my sites to Digital Ocean. Their droplet system (without hidden costs) is great. I've yet to decide where to put Getting Simple's podcast audio files. A powerful package by the Spatie team makes backing up websites a breeze: I can schedule automatic database and file backups at desired time intervals, even uploading them to multiple disks (such as Digital Ocean Spaces, Dropbox, or Amazon S3).

I've recently started using imgix to distribute my images and remove that load from website servers. Their image-processing REST API is flexible and removes many headaches and the time lost manually editing images with Photoshop or other applications, be it to apply simple effects, sharpening, resizing, or even adding watermarks or padding. And their CDN makes distribution faster.

I rely less and less on TypeKit for web fonts, as I either serve the font files or use Google Fonts. There are also beautiful typefaces from type foundries that I might use soon. (Milieu Grotesque comes to mind.)

A big highlight (that sadly only runs on macOS) is Laravel Valet. After spending months learning how to configure Nginx blocks and crashing the system multiple times, I found this simple tool that handles everything for you. There's a bit of reading to do to fully understand what it does, but I'd summarize its benefits in two commands: valet link and valet unlink. With one command, your computer serves a PHP app at http://app-name.test from the directory in which you run it, and the second command stops serving it. You can quickly register a folder to serve an app (say, with valet link nono) and then go to its URL to test the site locally (at http://nono.test). Behind the scenes, Valet uses dnsmasq, php, and nginx. Not having to deal with them on a daily basis makes things easy (even though I like to learn what's behind these systems and how to do things the manual way in case there's a need for it).

Another thing I'm loving is Cron jobs. They can be set either on Linux (as I do on Digital Ocean) or on macOS with crontab -e. You also have to learn a bit about how Cron works, but the short story is that it lets you schedule tasks: command executions at whatever time interval (or time of day) you want. For instance, * * * * * curl https://google.com would ping Google every minute. And you can go from every minute to certain times of the day or fractions of the hour. Laravel builds on top of Cron by letting you schedule commands with a clean, high-level API (for instance, ->everyMinute() or ->dailyAt('17')). All you do is set a Cron job to execute the Laravel scheduler every minute, and Laravel decides which commands to run when.

Last but not least, I'd highlight the importance of logging. Most environments have ways to log executions and errors, and logs are the easiest way to find out what's breaking your system. I've added a route in Folio's administration to visualize logs from anywhere, and, at a lower level, Nginx lets you log access and errors as well.

I'm constantly learning, and Folio and my website are my playground.

As I started saying, these are loose thoughts on much of the tech I've been exploring over the past year. I'm also learning about Docker, TensorFlow, Runway, and much more, and I frequently keep notes on Dropbox Paper. Some of my experiments are in the open on GitHub, and I've recently started sharing what I'm learning on this YouTube playlist.

What are you tinkering with?

MARCH 18, 2020

try:
    %tensorflow_version 2.x
except Exception:
    pass

import tensorflow as tf

Note that %tensorflow_version is only available in Colab and not in regular Python.

FEBRUARY 26, 2020

I was getting this error in Laravel as I created a new app using the latest version (that is, 6.2). I'm not sure why, but the class would work locally and not remotely (on a deployment running Ubuntu 18.04.3 on DigitalOcean, to be precise).

I was using \ResourceBundle::getLocales('') to get a list of valid locales present on a website and then using in_array($needle, $array) to check whether a given locale is valid in PHP.

Here's how I fixed it.

  • composer require symfony/intl to install Symfony's Intl component.
  • Replaced my in_array calls with \Symfony\Component\Intl\Locales::exists($translation).

FEBRUARY 21, 2020

In C#, .NET, and Visual Studio, you can use the Random class to generate random numbers.

First, you create a random number generator.

var random = new Random(1); // where 1 is our seed

Then you request the next random number (or next random double).

// Next random number
var aRandomNumber = random.Next();

// Next random double
var aRandomDouble = random.NextDouble();

FEBRUARY 13, 2020

Set editor to nano (or your editor of choice).

export EDITOR=nano
crontab -e

Edit the file to, for instance, create a timestamped folder in your Desktop folder every minute.

* * * * * cd ~/Desktop && mkdir `date +\%y\%m\%d_\%H\%M\%S`

FEBRUARY 10, 2020

Let's say you stage all your Git changes and then commit them.

git add --all
git commit -m "Edit REDME.md"

There's a typo on REDME — should read README — and we want to "amend" this error.

git commit --amend

The commit --amend command lets you edit the commit message in the vim editor.

You can also change the message by specifying the new message in the command line with the -m argument.

git commit --amend -m "Edit README.md"

As the commit message is part of the commit itself, editing the message alters the commit hash. This means that if you've already pushed the commit to a remote, the remote won't let you push the edited commit directly. But you can force it.

git push --force origin branch-name

FEBRUARY 6, 2020

npx and create-react-app make it easy to create a new app running React and TypeScript.

npx create-react-app my-app --template typescript

Then you go into the folder and run the app.

cd my-app
npm start

You can create a JavaScript (non-TypeScript) app by removing the last bit, --template typescript. And you can also run the app with yarn start.

If, as I was, you're not getting the app to work, you might have an older global installation of create-react-app. In my case, I had installed it with npm install -g create-react-app (which I verified by running create-react-app -V in the Terminal). To make sure npx uses the latest version of create-react-app, you need to uninstall the global version installed with npm.

npm uninstall -g create-react-app

Before you go

If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, React, and TypeScript.

FEBRUARY 5, 2020

# Push the current branch to origin
git push origin $(git branch | grep \* | cut -d ' ' -f2)

JANUARY 23, 2020

// Collapse any run of whitespace (spaces, tabs, newlines) into a single space
$output = preg_replace('!\s+!', ' ', $input);

From StackOverflow.

OCTOBER 11, 2019

If you want to add the .cs files from one Visual Studio solution (or project) into another, without duplicating the files or moving them over, you can use the Add > Existing Item.. option. Instead of selecting the files and clicking Add (which would copy the files over to your project folder, duplicating them), click the small arrow on the Add button and choose Add As Link. The files will be linked to your project as a reference to the other project, and editing them will change their code for both Visual Studio projects.

OCTOBER 8, 2019

The short answer is that you can't. But there's a workaround.

Let's say you have a newsletter design as follows, which shows the post image, followed by its title and full content.

*|RSSITEM:IMAGE|*

*|RSSITEM:TITLE|*

*|RSSITEM:CONTENT_FULL|*

The *|RSSITEM:IMAGE|* RSS merge tag will pull in the image and assign the created img element the width and height of the original image. Something like this (an example extracted from my Mailchimp campaign using the design template above).

<img class="mc-rss-item-img" src="https://nono.ma/img/u/sketch-170202_cambridge-clary-st.jpg" height="1868" width="2500" style="height: 1868;width: 2500;">

The problem is that we don't want to explicitly specify the height and width of our image, as it's oversized compared to the width of our design, which in my example was 600px plus margins, leaving 564px for the width of the image itself.

Here's the workaround: you create the image's img HTML element yourself, formatting it as you want, and use the *|RSSITEM:ENCLOSURE_URL|* merge tag, which according to Mailchimp "displays the URL for the attached file [of your post]." In my case, the RSS feed uses the enclosure tag to send the image URL.

<!-- RSS feed item enclosure tag -->
<enclosure url="https://nono.ma/img/u/sketch-170202_cambridge-clary-st.jpg" type="image/jpeg"/>

Then I can use that URL in my Mailchimp design as follows (also adding the post's title and full content below the image).

<img src="*|RSSITEM:ENCLOSURE_URL|*" style="max-width: 100%">

*|RSSITEM:TITLE|*

*|RSSITEM:CONTENT_FULL|*

SEPTEMBER 11, 2019

Windows natively lets you unzip files by right-clicking and choosing Extract All.. As a security mechanism, Windows keeps files downloaded from the internet blocked, and its native extraction keeps each extracted file blocked. You can skip the manual unblocking of the files by using 7-Zip to extract the files instead of Windows' mechanism.

We want to get rid of a message you might have seen over and over when downloading files from the internet: "This file came from another computer and might be blocked to help protect this computer," next to an Unblock button. It's fine to unblock a couple of files, but it's really annoying when you download a zip containing dozens of files that you have to unblock individually, one by one.

Download and install 7-Zip and unzip downloaded plugins with right-click > 7-Zip > Extract here, or any of the other extraction options.

This issue has given me a lot of headaches when installing both Dynamo and Grasshopper plugins, but you might run into it in other environments as well. I'm glad there's an alternative to unblocking each file (be it DLL, gha, gh, dyn, PDF, exe, or another format) separately: unzipping a group of files, all unblocked, at once.

SEPTEMBER 10, 2019

By default NPM—the Node Package Manager—uses its own public registry (at https://registry.npmjs.org) to pull down packages when you run npm install or npm update inside of a project.

You can specify different registries at multiple levels or scopes to override these default values (and other configuration settings). Ideally, when working on projects that require a specific registry (due to security, or maybe just because some packages live in a private repository) you would set NPM to look for packages on that registry inside the project folder.

Command level (only affects the command itself)

npm install --registry=https://some.registry.url

Project level (these would be the contents of the .npmrc file inside of the project folder)

registry=https://some.registry.url
@namespace:registry=https://some.other.registry.url

Global (these would be the contents of the .npmrc file inside of your system user folder)

registry=https://some.registry.url
@namespace:registry=https://some.other.registry.url

How to check your configuration is working

Run the npm config list command to see what variables are set. If you run it in a project folder with an .npmrc file, you should see that file's overriding configuration settings first, and the system global settings after that.

If you run the command in any other folder (without an .npmrc file with custom settings) you should see whatever is set in the global .npmrc and then the default configuration values of NPM.

SEPTEMBER 10, 2019

A cheat-sheet for mathematical notation in [JavaScript] code form.
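
For instance, in the spirit of that cheat sheet, sigma (summation) notation maps to a loop or a reduce in code; this small example is my own, not copied from the sheet:

// Sigma notation: the sum of x_i for i = 0..n-1
const xs = [1, 2, 3, 4];
const sum = xs.reduce((acc, x) => acc + x, 0);

console.log(sum); // 10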

SEPTEMBER 9, 2019

Today I've automated the backup of the configuration, database, and static files of all the websites I manage. Two and a half hours that will save me a lot of time in the future and remove stress when weird things happen. The backup—of six websites on three different servers running Laravel—downloads a copy of the database, the .env files, and the static files (a zip with the contents of the public folder) of each site.

I'll probably open source these scripts in the near future.

One new thing I learned was creating bash functions, like this one.

# create a variable with current date, formatted as yymmdd_HHMMSS
DATE_NOW=$(date '+%y%m%d_%H%M%S') 

# function that zips something and removes it
zip_and_remove() { cd $1 && zip "$2.zip" $2 && rm $2 && cd ..; }

# function that downloads a file via ssh then calls the previous one
download_zip_remove() { scp $1 $2/$3 && zip_and_remove $2 $3; }

# a function call
download_zip_remove root@192.168.1.2:/var/www/site.com/folio/.env $DESTINATION $(echo $DATE_NOW)_$SITENAME.env

SEPTEMBER 5, 2019

Revit allows you to import PDF and image files from its Ribbon menu. After clicking either button (Import PDF or Import Image) you get the same window, really, just with a different extension pre-selected.

You import a file and place it into a view, and if the file happens to be a PDF file you select a page and a rasterization resolution (in DPI).

Internally, Revit uses the PDFNet library made by PDFTron to manipulate PDF files. One of the operations it seems to use it for is rasterizing the contents of a PDF so they can be displayed in a view. This process makes the image data (of the image itself, or of a rasterized PDF page) available inside Revit. Using the Revit Lookup add-in, I found that the ImageType class offers a GetImage() method that returns a Bitmap object containing that image data.


Remember that when you pick an existing imported image or PDF, you are selecting an object of the ImageInstance class, which wraps an ImageType object. So we need to get the ImageInstance element, access its ImageType, and then get its image data.


Let's see the code.

// This code goes somewhere in your Revit add-in
// Prompt user to pick a Revit Element
Reference pickedObject = UIDoc.Selection.PickObject(Autodesk.Revit.UI.Selection.ObjectType.Element);

// Do something if object selection exists
if (pickedObject != null)
{
    // Get Element
    Element element = Doc.GetElement(pickedObject.ElementId);

    // Get Element Type
    ElementId elementTypeId = element.GetTypeId();
    ElementType elementType = Doc.GetElement(elementTypeId) as ElementType;

    // Check if picked element is an ImageInstance
    if (element is ImageInstance)
    {
        // Cast types for ImageInstance and ImageType
        // ImageType Id = ImageInstance Id - 1
        ImageInstance image = element as ImageInstance;
        ImageType imageType = elementType as ImageType;

        // Get image properties
        var filename = image.Name; // eg. FloorPlan.pdf

        // Get imported file path from ImageType
        var filepath = imageType.get_Parameter(BuiltInParameter.RASTER_SYMBOL_FILENAME).AsString();
        var pixelWidth = imageType.get_Parameter(BuiltInParameter.RASTER_SYMBOL_PIXELWIDTH).AsInteger();
        var pixelHeight = imageType.get_Parameter(BuiltInParameter.RASTER_SYMBOL_PIXELHEIGHT).AsInteger();

        // Get image data
        Bitmap bitmap = imageType.GetImage();

        // Save image to disk
        bitmap.Save(@"C:\users\username\Desktop\image.bmp");
    }
}

Before you go

If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, Revit, C#, TypeScript, and React.

SEPTEMBER 3, 2019

If you are inside a function that uses a type parameter (T), you can use Console.WriteLine, for instance, to inspect what type is being used in a given function call.

Console.WriteLine(typeof(T));
// System.Object or another type

JULY 24, 2019

This example uses regular expressions (RegExp) to replace multiple appearances of a substring in a string.

const string = `The car is red. The car is black.`;

const replacedString = string.replace(/car|is/g, "·····");

console.log(replacedString);
// logs: The ····· ····· red. The ····· ····· black.

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

JULY 24, 2019

Here is an example of nested Promise.all() calls. We are using the Fetch API to load a given path or URL, then requesting the arrayBuffer() of each of the responses we get back. This is a trivial problem if we do it all asynchronously, but we want to do something with the output buffers when we have them all available, and not one by one.

Specifically, this code tries to (1) fetch an array of images; (2) get their array buffers; (3) then obtain their base64 representation. In essence, map an array of images (by providing their paths or URLs) to their corresponding base64 string.


While this technique works in both TypeScript and JavaScript, the code is only shown in TypeScript.

Approach 1: Verbose

const images = [/* Array of image URLs (or local path if running in Electron) */]

Promise.all(images.map((url) => fetch(url))).then((responses: any) => {

    return Promise.all(responses.map((res: Response) => res.arrayBuffer())).then((buffers) => {
        return buffers;
    });

}).then((buffers: any) => {

    return Promise.all(buffers.map((buffer: ArrayBuffer) => {
        return this.arrayBufferToBase64(buffer);
    }));

}).then((imagesAsBase64: any) => {

    // Do something with the base64 strings
    window.console.log(imagesAsBase64);

});

Approach 2: Simplified

const layerImages = [/* Array of image URLs (or local path if running in Electron) */]

Promise.all(layerImages.map((url) => fetch(url))).then((responses: any) => {

    return Promise.all(responses.map((res: Response) => res.arrayBuffer())).then((buffers) => {
        return buffers.map((buffer) => this.arrayBufferToBase64(buffer));
    });

}).then((imagesAsBase64: any) => {

    // Do something with the base64 strings
    window.console.log(imagesAsBase64);

});

Array Buffer to base64

// source: stackoverflow.com
private arrayBufferToBase64(buffer: any) {
    let binary = "";
    const bytes = [].slice.call(new Uint8Array(buffer));
    bytes.forEach((b: any) => binary += String.fromCharCode(b));
    // Inside of a web tab
    return window.btoa(binary);
}

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

JUNE 10, 2019

#!make
include .env
export $(shell sed 's/=.*//' .env)

MAY 17, 2019

Yay! Visual Studio just announced a preview release of an "extension [that] let[s] you work with [Visual Studio] Code over SSH on a remote machine or VM, in Windows Subsystem for Linux (WSL), or inside a Docker container."

Go ahead and install the Remote Development extension. "An extension pack that lets you open any folder in a container, on a remote machine, or in WSL and take advantage of VS Code's full feature set."

MAY 15, 2019

A while ago, we needed to grab the App window variable (exposed by CefSharp), and we were extending the Window interface to do that. There seems to be a better way to get variables that are defined in the Window environment.

I learned this from the article An advanced guide on how to setup a React and PHP.

Say you are defining a variable or object you want to read from React (as in CefSharp, or directly in the HTML, as shown below):

// inside of your app entry HTML file's header
<script>
var myApp = {
  user: "Name",
  logged: true
}
</script>

You can do a declare module 'myApp' in index.d.ts, as sketched below, and then add the myApp variable as an external library in Webpack's config file.
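
A minimal ambient declaration might look like this; the shape mirrors the myApp global above (I'm typing user loosely, since the destructuring example further down treats it as an object):

// index.d.ts
declare module "myApp" {
    const myApp: {
        user: any; // a string in the HTML snippet, an object in the destructuring example below
        logged: boolean;
    };
    export default myApp;
}

The Webpack externals entry below then maps the module name myApp to that global.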

externals: {
  myApp: `myApp`,
},

Then you can import it as if it were a module in TypeScript (or JavaScript) files with React.

import myApp from 'myApp';

And you can even use destructuring to get internal properties directly.

const { user: { name, email }, logged } = myApp;

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

FEBRUARY 25, 2019

In trying to export my React app's Redux store from index.tsx to use it somewhere else outside of the React application, I was getting an Invariant Violation: Target container is not a DOM element error while running Jest tests (with Enzyme and Webpack) on the App component (App.tsx).

I found a solution to this error for my use case: using the same Redux store React is using, outside of React.

The error

The initial code that didn't work when testing React looked like this.

// index.tsx

import * as React from "react";
import { render } from "react-dom";
import { Provider } from "react-redux";
import { applyMiddleware, compose, createStore } from "redux";
import App from "./components/App";
import { rootReducer } from "./store/reducers";
import { initialState } from "./store/state";

const middlewares = [];

export const store = createStore(
    rootReducer,
    initialState,
    compose(applyMiddleware(...middlewares)),
);

render(
    <Provider store={store}>
        <App />
    </Provider>,
    document.getElementById("root"),
);

The solution

Separate the Redux store logic into a new file named store.ts, then create a default export (to be used by index.tsx, i.e., the React application) and a non-default export with export const store (to be used from non-React classes), as follows.

// store.ts

import { applyMiddleware, compose, createStore } from "redux";
import logger from "redux-logger";
import { rootReducer } from "./store/reducers";
import { initialState } from "./store/state";

const middlewares = [];

export const store = createStore(
    rootReducer,
    initialState,
    compose(applyMiddleware(...middlewares)),
);

export default store;

// updated index.tsx

import * as React from "react";
import { render } from "react-dom";
import { Provider } from "react-redux";
import App from "./components/App";
import store from "./store";

render(
    <Provider store={store}>
        <App />
    </Provider>,
    document.getElementById("root"),
);

Using the Redux store in non-React classes

// MyClass.ts

import { store } from "./store"; // store.ts

export default class MyClass {
  handleClick() {
    store.dispatch({ ...new SomeAction() });
  }
}

The default export

A small note before you go. Here is how to use the default and the non-default exports.

  • export default store; is used with import store from "./store";
  • export const store = ... is used with import { store } from "./store";

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

DECEMBER 18, 2018

Here are my highlights of Deep dive into Electron’s main and renderer processes by Cameron Nokes.

[Each of these processes is] an operating-system-level process, or as Wikipedia puts it, "an instance of a computer program that is being executed."

[…] Each of these processes run concurrently to each other. […] [M]emory and resources are isolated from each other. […] The two processes don't share memory or state.

Why multiple processes?

Chromium runs each tab in a separate process so that if one tab runs into a fatal error, it doesn't bring down the entire application. […] "Chromium is built like an operating system, using multiple OS processes to isolate web sites from each other and from the browser itself."

Main process

[I]s responsible for creating and managing BrowserWindow instances and various application events. It can also register global shortcuts, create native menus and dialogs, respond to auto-update events, and more. Your app's entry point will point to a JavaScript file that will be executed in the main process. A subset of Electron APIs are available in the main process, as well as all Node.js modules. The docs state: "The basic rule is: if a module is GUI or low-level system related, then it should be only available in the main process." (Note that GUI here means native GUI, not HTML based UI rendered by Chromium). This is to avoid potential memory leak problems.

Renderer process

The renderer process is responsible for running the user interface of your app, or in other words, a web page, which is an instance of webContents. All DOM APIs, Node.js APIs, and a subset of Electron APIs are available in the renderer. […] [O]ne or more webContents can live in a single window […] because a single window can host multiple webviews and each webview is its own webContents instance and renderer process.


See this Venn diagram of Electron (provided by the source).


How do I communicate between processes?

Electron uses interprocess communication (IPC) to communicate between processes—same as Chromium. IPC is sort of like using postMessage between a web page and an iframe or webWorker […] you send a message with a channel name and some arbitrary information. IPC can work between renderers and the main process in both directions. IPC is asynchronous by default but also has synchronous APIs (like fs in Node.js).
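
As a minimal sketch of what that looks like in code (the ping/pong channel names are made up, and this assumes a renderer with Node integration enabled, which was the default at the time):

// main.ts (main process)
import { ipcMain } from "electron";

ipcMain.on("ping", (event, message: string) => {
    console.log(message); // logs "hello from the renderer"
    event.sender.send("pong", "hello from the main process");
});

// renderer.ts (renderer process)
import { ipcRenderer } from "electron";

ipcRenderer.on("pong", (_event: unknown, message: string) => console.log(message));
ipcRenderer.send("ping", "hello from the renderer");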

Electron also gives you the remote module, which allows you to, for example, use a main process module like Menu as if it were available in the renderer. No manual IPC calls [are] necessary, but what's really going on behind the scenes is that you are issuing commands to the main process via synchronous IPC calls. (These can be debugged with devtron.)

Can I make something work in both the main and renderer?

Yes, because main process APIs can be accessed through remote, you can do something like this:

const electron = require('electron');
const Menu = electron.Menu || electron.remote.Menu;

//now you can use it seamlessly in either main or renderer

console.log(Menu);

(See the full thing.)

DECEMBER 7, 2018

Here's a note on how to display dialogs, alerts, and notifications on macOS with AppleScript, useful to automate day-to-day tasks you do with your machine, or even create complex programs.

(To the uninitiated, you would run this code by opening the AppleScript Editor (on macOS), pasting the code there, and hitting run.)

Dialog and Alert

display alert "This is an alert" buttons {"No", "Yes"}
if button returned of result = "No" then
	display alert "No was clicked"
else
	if button returned of result = "Yes" then
		display alert "Yes was clicked"
	end if
end if

System notification

display notification "Have a simple day!"

Want to see older publications? Visit the archive.

Listen to Getting Simple.