Nono.MA

FEBRUARY 25, 2021

Batch-Export PowerPoint Slides to Images Programmatically

unoconv is a tool to "convert between any document format supported by OpenOffice," available to install via Homebrew on macOS. You can convert, for instance, ppt or pptx files to png images (or to a multi-page PDF file) by running a command with this command-line interface program. The project is open source and you can browse its code on GitHub.

Install unoconv with Homebrew

brew install unoconv

Common issues: LibreOffice not found on your system

I ran into this issue when I first ran the unoconv command.

unoconv
# unoconv: Cannot find a suitable office installation on your system.
# ERROR: Please locate your office installation and send your feedback to:
#        http://github.com/dagwieers/unoconv/issues

That's because unoconv can't find libreoffice. You can install its Homebrew Cask.

brew install --cask libreoffice

After doing that, unoconv can find the libreoffice installation.

unoconv
# unoconv: you have to provide a filename or url as argument
# Try `unoconv -h' for more information.

Export PowerPoint Slides to PDF

unoconv slides.pptx -f pdf

Convert PDF to PNG or JPG Images

Even though you can directly export a PowerPoint presentation to JPEG or PNG format, unoconv exports only the first page by default.
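For reference, a direct export, which will only rasterize the first slide as noted above, would look something like this. (The png format name is an assumption; check the list of supported formats mentioned below.)

unoconv -f png slides.pptx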

You can use ImageMagick's convert tool to rasterize the PDF pages as images.

convert -density 300 slides.pdf image%d.jpg

Batch-convert Presentations to Images

Here's a bash script that converts all pptx presentations in a folder to jpg images, creating one folder of images per presentation.

# Convert all pptx files to multi-page pdf files
unoconv -f pdf *.pptx

# Loop through pptx files and rasterize each PDF
# into its own folder of jpg images
for f in *.pptx
do
    echo "${f}.."
    mkdir -p "${f}-jpg"
    convert -density 20 "${f%.*}.pdf" "./${f}-jpg/image%d.jpg"
done

Available formats

You can see the extensive list of supported input and output formats in unoconv's documentation, and read more about how to use unoconv in its manual page or by running unoconv -h.

FEBRUARY 24, 2021

To read environment variables from a Python script or a Jupyter notebook, you would use this code—assuming you have a .env file in the directory where your script or notebook lives.

# .env
FOO=BAR
S3_BUCKET=YOURS3BUCKET
S3_SECRET_KEY=YOURSECRETKEYGOESHERE
# script.py
import os
print(os.environ.get('FOO')) # Prints None because the variable isn't loaded yet

This won't return the value of the environment variables, though, as you need to parse the contents of your .env file first.

For that, you can either use python-dotenv.

pip install python-dotenv

Then use this Python library to load your variables.

# Example from https://pypi.org/project/python-dotenv/
import os
from dotenv import load_dotenv
load_dotenv()

# OR, the same with increased verbosity
load_dotenv(verbose=True)

# OR, explicitly providing path to '.env'
from pathlib import Path  # Python 3.6+ only
env_path = Path('.') / '.env'
load_dotenv(dotenv_path=env_path)

# Print variable FOO
print(os.environ.get('FOO')) # Returns 'BAR'

Or load the variables manually with this script.

# Note: the `!cat` syntax only works in IPython/Jupyter notebooks
import os
env_vars = !cat ../script/.env
for var in env_vars:
    key, value = var.split('=', 1) # split on the first '=' only
    os.environ[key] = value

# Print variable FOO
print(os.environ.get('FOO')) # Returns 'BAR'
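If you're running a plain Python script rather than a notebook, the !cat syntax isn't available, but the same idea works by reading the file directly. Here's a minimal sketch that assumes simple KEY=VALUE lines with no quoting:

import os

with open('../script/.env') as f:
    for line in f:
        line = line.strip()
        # Skip blank lines and comments
        if not line or line.startswith('#'):
            continue
        key, value = line.split('=', 1)
        os.environ[key] = value

# Print variable FOO
print(os.environ.get('FOO')) # Returns 'BAR'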

FEBRUARY 15, 2021

Here's how I installed pandoc on my MacBook Pro (13-inch, M1, 2020) to run with Rosetta 2 — not natively, but on the x86_64 architecture — until a universal binary for macOS is built that supports the arm64 architecture of the new Apple Silicon Macs.

This guide may be used to install other non-universal brew packages.

# Install Homebrew for x86_64 architecture
# https://soffes.blog/homebrew-on-apple-silicon
arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
# Install pandoc using that version of Homebrew
arch -x86_64 /usr/local/bin/brew install pandoc

Outputs

==> Downloading https://homebrew.bintray.com/bottles/pandoc-2.11.4.big_sur.bottle.tar.gz
Already downloaded: /Users/nono/Library/Caches/Homebrew/downloads/34e1528919e624583d70b1ef24381db17f730fc69e59144bf48abedc63656678--pandoc-2.11.4.big_sur.bottle.tar.gz
==> Pouring pandoc-2.11.4.big_sur.bottle.tar.gz
🍺  /usr/local/Cellar/pandoc/2.11.4: 10 files, 146.0MB
# Check pandoc's version
arch -x86_64 pandoc --version

Outputs

pandoc 2.11.4
Compiled with pandoc-types 1.22, texmath 0.12.1, skylighting 0.10.2,
citeproc 0.3.0.5, ipynb 0.1.0.1
User data directory: /Users/nono/.local/share/pandoc or /Users/nono/.pandoc
Copyright (C) 2006-2021 John MacFarlane. Web:  https://pandoc.org
This is free software; see the source for copying conditions. There is no
warranty, not even for merchantability or fitness for a particular purpose.

Converting Markdown to HTML

arch -x86_64 pandoc sample.md -o sample.html

Contents of sample.md:

# Hello, Apple Silicon!

- Pandoc
- seems
- to
- work.

Contents of sample.html:

<h1 id="hello-apple-silicon">Hello, Apple Silicon!</h1>
<ul>
<li>Pandoc</li>
<li>seems</li>
<li>to</li>
<li>work.</li>
</ul>

FEBRUARY 9, 2021

When running any git command — including git pull, git push, git status, etc. — I was getting this error on macOS Big Sur.

xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

The message means the Xcode Command Line Tools are not properly installed, and you need to run the following command to fix this.

xcode-select --install

A window will prompt you to install the developer tools. In my case, after about two minutes, a message saying "The software was installed" showed up on my machine, a MacBook Pro (13-inch, M1, 2020), and I was good to go.
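If you want to double-check that the tools are in place, you can print the active developer directory. (The path below is what you'd typically expect; yours may differ if you have the full Xcode installed.)

xcode-select -p
# /Library/Developer/CommandLineTools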

JANUARY 20, 2021

Suppose you've started your container with ./docker-wine wine notepad and saved your files to your volume, for instance, a file named new.txt in the My Music folder. Here's how to copy it to your machine with docker cp.

docker cp wine:/home/wineuser/new.txt ~/Desktop/
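docker cp also works in the opposite direction, in case you want to copy a file from your machine into the container. A quick sketch, with a placeholder file name:

docker cp ~/Desktop/settings.txt wine:/home/wineuser/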

JANUARY 19, 2021

Here's how to execute a deployed AWS Lambda function with the AWS command-line interface.

Create a payload.json file that contains a JSON payload.

{
  "foo": "bar"
}

Then convert the payload to base64.

base64 payload.json
# returns ewogICJmb28iOiAiYmFyIgp9Cg==

And replace the contents of payload.json with that base64 string.

ewogICJmb28iOiAiYmFyIgp9Cg==

Invoke your Lambda function using that payload.

aws lambda invoke \
--function-name My-Lambda-Function-Name \
--payload file://payload.json \
output.json

The request's response will be printed in the console and the output will be saved in output.json.
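Note that if you're on AWS CLI v2, you can skip the base64 step by telling the CLI to treat the payload as raw JSON with the --cli-binary-format option. A sketch, assuming version 2 of the CLI:

aws lambda invoke \
--function-name My-Lambda-Function-Name \
--cli-binary-format raw-in-base64-out \
--payload file://payload.json \
output.json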

If you're developing locally, you can use the aws lambda update-function-code command to synchronize your local code with your Lambda function.

JANUARY 14, 2021

This experiment is incomplete.

Kurt Schmucker says it should work about as well as your computer's internal drive as long as you're not performing too many read and write operations.

Schmucker recommends SSD2GO drives. The SSD2GO PKT XT 1TB (350€) reads at up to 1,024 MB/s. At a more affordable price point is the Samsung T7 1TB ($160), which reads at up to 1,050 MB/s and writes at up to 1,000 MB/s (on devices that support USB 3.2 Gen 2). The Samsung T7 Touch 1TB ($190) ships with the same read and write speeds as the T7, plus fingerprint protection—the disk has a fingerprint reader on top and, I suspect, it only lets you unlock the disk after verifying your fingerprint. At the highest price point—but the fastest speed—is the Samsung X5 1TB ($400), with read and write performance of up to 2,800 MB/s and 2,300 MB/s, respectively, and data transfer speeds of up to 40 Gb/s. The tiny size and portability of the ADATA SE730H 512GB ($170) caught my attention—you can carry it in your pocket—but it reads at 500 megabytes per second.

If I continue investigating down this path, I'll write down the hard drive I bought, the process to install and move Windows Parallels to the drive, and if it performs well.

If you want to move your Parallels VM to your external drive, follow Kurt Schmucker's guide.

JANUARY 13, 2021

Create A Private Key

openssl genrsa -out private.pem 4096

Create A Public Key

openssl rsa -in private.pem -out public.pem -outform PEM -pubout

Encrypt Files

openssl rsautl -encrypt -inkey public.pem -pubin -in file.txt -out file.ssl

Decrypt Files

openssl rsautl -decrypt -inkey private.pem -in file.ssl -out decrypted.txt

Notes

JANUARY 13, 2021

Create a GPG Key

gpg --full-generate-key

List Keys

gpg --list-keys
gpg --list-secret-keys --keyid-format LONG

Encrypt a File

gpg --output file.gpg --encrypt --recipient mundowarezweb@gmail.com file.txt

Decrypt a File

gpg --output file.txt --decrypt file.gpg

Exporting a Public Key

From https://www.gnupg.org/gph/en/manual/x56.html.

In binary format (inconvenient to publish on the web or send via email).

gpg --output nono.gpg --export mundowarezweb@gmail.com

In plain-text format.

gpg --armor --export mundowarezweb@gmail.com

In plain-text format, saved to a file.

gpg --armor --output nonos-key.gpg --export --recipient mundowarezweb@gmail.com
gpg --armor --export --recipient mundowarezweb@gmail.com > nonos-key.gpg

JANUARY 13, 2021

I got this error while trying to pip3 install tensorflow. I tried python3 -m pip install tensorflow as well — it didn't work.

ERROR: Could not find a version that satisfies the requirement tensorflow
ERROR: No matching distribution found for tensorflow

As in my case, the reason for this error might be that you're using pip from a Python version not yet supported by any version of TensorFlow. I was running Python 3.9, and TensorFlow only supported up to Python 3.8 at the time. By creating a new environment with Python 3.8 (or reverting the current environment to 3.8), I could pip3 install tensorflow successfully.
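For instance, with conda (an assumption; any tool that lets you pin the Python version, such as pyenv or virtualenv pointed at a 3.8 interpreter, works too), creating a compatible environment looks like this:

# Create and activate an environment pinned to Python 3.8
conda create -n tf python=3.8
conda activate tf

# Install TensorFlow inside that environment
pip install tensorflow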

JANUARY 8, 2021

About six months ago, Microsoft launched Pylance, a "fast and feature-rich language support for Python," available in the Visual Studio Code marketplace.

Pylance depends on our core Python extension and builds upon that experience, for those of you who have already installed it.

Among its main features are type information, auto-imports, multi-root workspace support, and type checking diagnostics.

The name Pylance serves as a nod to Monty Python’s Lancelot, who is the first knight to answer the bridgekeeper’s questions in the Holy Grail.

DECEMBER 2, 2020

Why Spotify kept removing my show

I fixed a bug that sporadically made Spotify remove my show, the Getting Simple podcast, from its platform without any logical explanation and, more worrisome, without warnings or notifications.


Some time ago, I noticed the podcast's RSS feed displayed episode release dates localized in Chinese and other languages. Something that, to my eyes, seemed random. Yesterday, I finally identified the issue.

The XML feed is cached for thirty minutes at a time — a duration I set to avoid overloading the server by re-generating the feed on every request.

But this feed re-generation used the requesting party's "locale," a code that corresponds to the language and region configured on the system that performs a web request. For instance, the en-US locale represents a visitor or bot configured to use the English language and the United States region. A localized site — one that can adjust its content to different locales — would display a date as Wed, 02 Dec 2020 for en-US visitors and as Mié., 02 Dic. 2020 for es-ES visitors.

date('D, d M Y H:i:s O');
// returns "Wed, 02 Dec 2020 05:19:14 -0500"

The date('D, d M Y H:i:s O') PHP function uses the operating system's language and region to determine what to display, but a localized website can adjust to the visitor's locale or even comply with explicit requirements.

App::setLocale('en-US'); // force locale to en-US

Item::formatDate(Date::now(), 'D, d M Y H:i:s O')
// returns "Wed, 02 Dec 2020 05:22:55 -0500"

App::setLocale('es-ES'); // force locale to es-ES

Item::formatDate(Date::now(), 'D, d M Y H:i:s O')
// returns "Mié., 02 Dic. 2020 05:22:55 -0500"

The issue was that the re-generation of the podcast feed was dependent on the requesting agent's locale when the cache expired, which could be any user or bot. Spotify was pinging the podcast and could load a feed generated by an agent that used a locale other than English in the past thirty minutes.

App::setLocale('en-US');
// Generate episode timestamps here
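A rough sketch of the shape of the fix, with a hypothetical cache key, duration, and feed-rendering helper (not the site's actual code):

$feed = Cache::remember('podcast-feed', 30 * 60, function () {
    // Force English dates regardless of which agent triggers the refresh
    App::setLocale('en-US');
    return renderFeed(); // hypothetical helper that renders the XML feed
});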

When Spotify found dates were not in English, it removed the show altogether—something that Apple Podcasts and other networks didn't do—and then added the podcast back hours later, when episode dates were in English again.

Spotify player showing a Getting Simple episode.

Spotify took its time to reload all existing episodes after I forced the localization of episode timestamps to the en-US locale and re-generated the feed. Now all episodes and their stats are back. Hopefully, the show won't disappear again, and users won't hit this ugly, erroring embedded player.

NOVEMBER 27, 2020

In DigitalOcean, running the do-release-upgrade command was returning the following message.

Checking for a new Ubuntu release
Please install all available updates for your release before upgrading.

Install all available updates

sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get dist-upgrade

Reboot the system

shutdown -r now

You could stop here if all you want is to install available updates. Read the warning below to make sure you don't break your live applications and to decide whether this is the best approach for you.

Upgrade Ubuntu

WARNING: Please read this article by DigitalOcean on the potential pitfalls of upgrading an existing installation with your applications running on it. Instead of upgrading in place, the recommended approach is to migrate your applications to a new, fresh instance running Ubuntu 20.04 LTS. (Run at your own risk!)

sudo do-release-upgrade

NOVEMBER 4, 2020

The Amazon Web Services (AWS) command-line interface — the AWS CLI — lets you update the code of a Lambda function right from the command line. Here's how.

aws lambda update-function-code \
--function-name my-function-name \
--region us-west-2 \
--zip-file fileb://lambda.zip

Let's understand what you need to run this command.

  • aws lambda update-function-code - to execute this command, you need the awscli installed on your machine and your authentication information configured for your account
  • --function-name - the name of an existing Lambda function in your AWS account
  • --region - the region in which your Lambda lives (in this case, Oregon, whose code is us-west-2; you can see a list of regions and their codes here)
  • --zip-file - the path to your zipped Lambda code with the fileb:// prefix; in the example, there's a lambda.zip file in the current directory (alternatively, you can use --s3-bucket and --s3-key to use a zip file from an S3 bucket)

After your function code has been updated, you can invoke the Lambda function to verify everything is working as expected.
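For instance, a minimal invocation that reuses the function name and region from the command above, saving the response to an arbitrary output file:

aws lambda invoke \
--function-name my-function-name \
--region us-west-2 \
output.json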

If you want to learn more about this command, here's the AWS CLI command reference guide, and here's the free Kindle version. Among other things, the CLI lets you create Lambda Layer versions, invoke functions, and much more.

OCTOBER 15, 2020

To zip a folder and its contents with Terminal, you can use the zip command in the CLI¹, and here are a few other goodies you can use to simplify your workflow, making sure the folder is compressed properly, all subdirectories are compressed recursively, and the zip filename is automatically set from the current folder's name.

zip -qr9 ../$(basename "$PWD").zip *

Anatomy of a zip command

  • zip is the command that archives files and folders
  • -qr9 are the arguments for the command
    • -q (or "quiet operation") is the argument to zip silently, without listing the files that are being added (useful for not to clutter Terminal or notebook outputs)
    • -r (or "recurse into directories") is the argument to zip everything in the directory recursively, and not only files at the first level
    • -9 (or "compress better") is the argument to trade speed for compression, the operation will take longer to complete but the compression will be better, use -1 to "compress faster" or -0 to "store only," without compression
  • ../$(basename "$PWD").zip is the argument that defines the name of your zip file
    • ../ specifies the file should be one level up in the directory structure
    • basename is a command to obtain the folder or file name of a path
    • $PWD is an environment variable that holds the current path of the Terminal (in our case, the folder path); it's the same value the pwd command prints
    • .zip specifies the file extension
  • * is the argument that lists the files to compress; the asterisk zips everything in the current directory (and subdirectories, due to the -r argument). You could list individual files instead (say, file.txt file.json file.png) or multiple wildcards (say, *.txt *.json *.png)
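For example, running the command from a folder named my-project (a hypothetical name) produces a my-project.zip archive one level up:

cd my-project
zip -qr9 ../$(basename "$PWD").zip *
# Creates ../my-project.zip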

  1. Command-line interface. 

OCTOBER 13, 2020

To write (or save) text to a file using Python, you can either append text or overwrite all existing contents with new text.

Appending text

To append text, open the file in append mode, write to it to add lines of text, and close it.

file = open('/path/to/file.txt', 'a') # 'a' is append-to-end-of-file mode
file.write('Adding text to this document.')
file.close()

Overwriting text

You can also write the entire contents of the file, overwriting any existing content, by using the w mode instead of a.

file = open('/path/to/file.txt', 'w') # 'w' is overwrite mode
file.write('This will override any existing content in the text to this document.')
file.close()
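As a side note, the idiomatic way to handle files in Python is a with block, which closes the file automatically even if an error occurs midway. A minimal sketch:

# 'with' closes the file for you; no need to call file.close()
with open('/path/to/file.txt', 'w') as file:
    file.write('This will overwrite any existing content.')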

Line breaks

You can use \n (or \r\n) and other escape sequences to add line breaks to your document.

file = open('/path/to/file.txt', 'w') # 'w' is overwrite mode
file.write('First line.\nSecond line.\nThird line.\n\nNono.MA')
file.close()
# file.txt
First line.
Second line.
Third line.

Nono.MA

OCTOBER 8, 2020

To determine whether a file or directory exists using Python you can use either the os.path or the pathlib library.

The os library offers three methods: path.exists, path.isfile, and path.isdir.

import os

# Returns True if file or dir exists
os.path.exists('/path/to/file/or/dir')

# Returns True if exists and is a file
os.path.isfile('/path/to/file/or/dir')

# Returns True if exists and is a directory
os.path.isdir('/path/to/file/or/dir')

The pathlib library has many methods (not covered here) but the pathlib.Path('/path/to/file').exists() also does the job.

import pathlib

file = pathlib.Path('/path/to/file')

# Returns True if file or dir exists
file.exists()
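pathlib also offers counterparts to os.path.isfile and os.path.isdir, in case you need to tell files and directories apart:

import pathlib

file = pathlib.Path('/path/to/file/or/dir')

# Returns True if exists and is a file
file.is_file()

# Returns True if exists and is a directory
file.is_dir()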

SEPTEMBER 23, 2020

Linters analyze code to catch errors and suggest best practices, often by inspecting the abstract syntax tree (AST): function complexity, syntax improvements, and so on.

Formatters fix style: spacing, line breaks, comments, and so on.
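In Python, for instance, flake8 is a common linter and black a common formatter. A quick sketch, assuming you install both with pip and have a script.py to check:

pip install flake8 black

# Lint: report errors and style issues without modifying the file
flake8 script.py

# Format: rewrite the file in place with a consistent style
black script.py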

SEPTEMBER 11, 2020

"Less than 50 days after the release YOLOv4, YOLOv5 improves accessibility for realtime object detection." Read the Roboflow post.

LAST UPDATED FEBRUARY 26, 2021

Here are resources that are helping me get started with machine learning, and a few that I would have loved to have known about earlier. I'll probably be updating this page with new resources from time to time.

Stanford Cheat Sheets

A summary of terms, algorithms, and equations. (I barely understand the equations. =) These sheets, developed by Afshine and Shervine Amidi, differentiate between artificial intelligence (AI), machine learning (ML), and deep learning (DL), though many concepts overlap with each other. See this Venn diagram.

Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems

I highly recommend this book, which I'm going through at the moment, written by an ex-Googler who worked on YouTube's video-classification algorithm. It's dense, but it introduces you to all the relevant artificial intelligence, machine learning, and deep learning concepts, and it guides you through preparing custom datasets to train algorithms (a bit of data science, I guess). At the same time, it introduces you to three of the most-used machine learning frameworks—Scikit-Learn, Keras, and TensorFlow. The last one, TensorFlow, is the one I use in my day-to-day job to develop and release machine learning models for production. Similar frameworks are Caffe and PyTorch, the latter used by Facebook developers. (Thanks to Keith Alfaro for the recommendation.)

Open-source code and tutorials

I got started with machine learning by trying open-source algorithms. It's common to visit the GitHub repository corresponding to a paper and give it a try. Two examples are Pix2Pix (2016) and EfficientDet (2020). You try to use their code as is, then try to use a custom dataset for training and see how the model performs for your needs.

TensorFlow re-writes many of these models and provides easy-to-follow tutorials.

  • Pix2Pix in TensorFlow Core - Made by the Google TensorFlow team, this tutorial lets you view the code on GitHub, download the Jupyter Notebook (written in Python), or run the notebook in Google Colab, where you can press a button in the cloud and run each piece of Python code to understand the different parts of setting up and training an algorithm: reading the dataset, preparing the training and validation sets, creating the model, training it, and more.
  • TensorFlow tutorials - This is a good place to get your hands dirty. While machine learning has a strong theoretical component, you can leave that aside and start by training and testing models for image classification, object detection, semantic image segmentation, and many more tasks.

Friendly user interfaces

  • Runway - A friend of mine, Cristóbal Valenzuela, is building his own machine learning platform for creatives. It's the place for people who don't know how to code (or don't want to) to be able to use complex machine learning models, training them with custom data and deploying them to the cloud. Here's an interview where he told me about the beginnings of Runway.
  • Machine Learning for Designers Talk - A talk I gave about these types of interfaces, a few projects, and the role they play for designers and people who don't know how to code.

Courses

Tutorials & live streams

  • Machine Learning Series YouTube playlist. Here is a compilation of some of the machine-intelligence-related video tutorials I've recorded.
  • Live Streams YouTube playlist. Weekly hands-on coding sessions on creative-coding, machine learning, art, design, and much more. From conceptual overviews to hands-on neural network architecture, automation, training, or cloud deployment.

Other resources

  • TensorFlow: Tensor and Image Basics - A video with basic tensor and image operations in TensorFlow. How to use tensors to encode images and matrices and visualize them.
  • TensorFlow: Visualizing Convolutions - A video to visualize the filters of an image convolution, an operation known for its ability to extract image features in an unsupervised way to perform classification tasks used in convolutional neural networks.
  • Awesome Machine Learning - A big and frequently-updated list of machine learning resources.
  • Suggestive Drawing - My Harvard master's thesis, in which I explore how the collaboration between human and artificial intelligences can enhance the design process.


AUGUST 24, 2020

Apache Groovy (Groovy Lang) "is a powerful, optionally typed and dynamic language, with static-typing and static compilation capabilities, for the Java platform aimed at improving developer productivity thanks to a concise, familiar and easy to learn syntax. It integrates smoothly with any Java program, and immediately delivers to your application powerful features, including scripting capabilities, Domain-Specific Language authoring, runtime and compile-time meta-programming and functional programming."

AUGUST 13, 2020

While macOS ships with Python 2 by default, you can install Python 3 and set it as the default Python version on your Mac.

First, you install Python 3 with Homebrew.

brew update && brew install python

To make this new version your default, you can add the following line to your ~/.zshrc file (or ~/.bashrc if you want to expose it in bash instead of zsh).

alias python=/usr/local/bin/python3

Then open a new Terminal and Python 3 should be running.

Let's verify this is true.

python --version # e.g. Python 3.8.5

How do I find the python3 path?

Homebrew provides info about any installed "bottle" via the info command.

brew info python
# python@3.8: stable 3.8.5 (bottled)
# Interpreted, interactive, object-oriented programming language
# https://www.python.org/
# /usr/local/Cellar/python@3.8/3.8.5 (4,372 files, 67.7MB) *
# ...

And you can find the path we're looking for with grep.

brew info python | grep bin
# /usr/local/bin/python3
# /usr/local/opt/python@3.8/libexec/bin

Another way

You can also symlink python3 to python.

ln -sf /usr/local/bin/python3 /usr/local/bin/python

In case your /usr/local/bin/python3 is also symlinked, you can check where it's symlinked to with:

readlink /usr/local/bin/python3

In my case, it returns ../Cellar/python@3.9/3.9.1_6/bin/python3.

How do I use Python 2 if I need it?

Your system's Python 2.7 is still there.

/usr/bin/python --version # e.g Python 2.7.16

You can also use Homebrew's Python 2.

brew install python@2

Before you go

If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, Python, and macOS.

JUNE 30, 2020

You can measure the time elapsed during the execution of TypeScript code by keeping a reference to the start time and then, at any point in your program, subtracting that start time from the current time to obtain the elapsed time between the two points.

const start = new Date().getTime();

// Run some code..

let elapsed = new Date().getTime() - start;

Let's create two helper functions to get the current time (i.e. now) and the elapsed time at any point from that moment.

// Returns current time
// (and, if provided, prints the event's name)
const now = (eventName = null) => {
    if (eventName) {
      console.log(`Started ${eventName}..`);
    }
    return new Date().getTime();
}

// Store current time as `start`
let start = now();

// Returns time elapsed since `beginning`
// (and, optionally, prints the duration in seconds)
const elapsed = (beginning = start, log = false) => {
    const duration = new Date().getTime() - beginning;
    if (log) {
        console.log(`${duration/1000}s`);
    }
    return duration;
}

With those utility functions defined, we can measure the duration of different events.

// A promise that takes X ms to resolve
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

// Measure duration (while waiting for 2 seconds)
(async function demo() {
    const waitInSeconds = 2;
    let beginning = now(`${waitInSeconds}-second wait`);
    // Prints Started 2-second wait..
    await sleep(waitInSeconds * 1000);
    elapsed(beginning, true);
    // Prints 2.004s
})();

Before you go

If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, React, and TypeScript.

JUNE 8, 2020

Just came across this machine learning (and TensorFlow) glossary which "defines general machine learning terms, plus terms specific to TensorFlow."

JUNE 2, 2020

In trying to use Artisan::call($command, $arguments) to execute a command exposed by my Laravel package—Folio—I was running into this issue.

The command "folio:clone" does not exist.

My commands were working in the terminal, by calling php artisan folio:clone, for instance, but they were not working programmatically when calling something like this.

Artisan::call('folio:clone 123 "New Title"');

Artisan::command was not a solution as it serves to register commands and not to execute them.

By looking into the FolioServiceProvider.php (the service provider of my own package) I noticed the $this->app->runningInConsole() check. My commands were being registered in the console but were not exposed elsewhere (that is, in the application itself).

I'd guess this is a security and performance measure. Commands that don't need to be available to the Laravel app are not registered for it.

Solution

The solution was simply registering the commands I want to be callable from my Laravel sites outside of the if statement that checks for $this->app->runningInConsole().

While eight commands are only available to run on the console, there's one available to both the console and the application's runtime.

if ($this->app->runningInConsole()) {
    $this->commands([
        \Nonoesp\Folio\Commands\GenerateSitemap::class,
        \Nonoesp\Folio\Commands\MigrateTemplate::class,
        \Nonoesp\Folio\Commands\TextAndTitleToJSON::class,
        \Nonoesp\Folio\Commands\ItemPropertiesExport::class,
        \Nonoesp\Folio\Commands\ItemPropertiesImport::class,
        \Nonoesp\Folio\Commands\ItemRetag::class,
        \Nonoesp\Folio\Commands\InstallCommand::class,
        \Nonoesp\Folio\Commands\CreateUserCommand::class,
    ]);      
}

$this->commands([
    \Nonoesp\Folio\Commands\ItemClone::class,
]);

In my case, I'm the maintainer of the package and could easily work around this limitation by taking the command I want to use in Laravel out of the if statement.

But you can register commands yourself in your app's $commands array in app/Console/Kernel.php. See the following example.

// app/Console/Kernel.php
protected $commands = [
    \Nonoesp\Folio\Commands\CreateUserCommand::class,
];

While the package only registers CreateUserCommand for the console, I can explicitly make it available to my entire application and call it with Artisan::call('folio:user {email} {password}') (which is this command's signature).
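For reference, Artisan::call also accepts the command name and an array of arguments separately, which avoids string quoting issues. A sketch with placeholder values that match the signature above:

Artisan::call('folio:user', [
    'email' => 'user@example.com',
    'password' => 'a-secure-password',
]);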

Thanks!

I hope you found this useful. Feel free to ping me at @nonoesp, join the mailing list, or check out other Laravel posts and code-related publications.

MAY 14, 2020

I recently got Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition by Aurélien Géron as a recommendation from Keith.

This second edition updates all code samples to work with TensorFlow 2, and the repository that accompanies the book—ageron/handson-ml2—is updated frequently to keep up with the latest changes.

The Python notebooks in that GitHub repository alone are super helpful to get an overview of state-of-the-art machine learning and deep learning techniques, from the basics of machine learning and classic techniques like classification, support vector machines, or decision trees, to the latest techniques to code neural networks, customize and train them, load and pre-process data, and tackle natural language processing, computer vision, autoencoders and GANs, or reinforcement learning.

MAY 13, 2020

#Graph2Plan

Nice work from Shenzhen, Carleton, and Simon Fraser Universities, titled Graph2Plan: Learning Floorplan Generation from Layout Graphs, along the lines of #HouseGAN. Via @alfarok.

Our deep neural network Graph2Plan is a learning framework for automated floorplan generation from layout graphs. The trained network can generate floorplans based on an input building boundary only (a-b), like in previous works. In addition, we allow users to add a variety of constraints such as room counts (c), room connectivity (d), and other layout graph edits. Multiple generated floorplans which fulfill the input constraints are shown.

Read the paper on Arxiv.

MAY 10, 2020

We propose In-Domain GAN inversion (IDInvert) by first training a novel domain-guided encoder which is able to produce in-domain latent code, and then performing domain-regularized optimization which involves the encoder as a regularizer to land the code inside the latent space when being finetuned. The in-domain codes produced by IDInvert enable high-quality real image editing with fixed GAN models.

MAY 5, 2020

Connect directly to RunwayML models with only a few lines of code to build web apps, chatbots, plugins, and more. Hosted Models live on the web and can be used anytime, anywhere, without requiring RunwayML to be open!

[…]

We've also released a JavaScript SDK alongside the new Hosted Models feature. Use it to bring a Hosted Model to your next project in just 3 lines of code.

APRIL 26, 2020

I managed to make this work by unlinking openssl.

https://github.com/wting/autojump/issues/540
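Assuming openssl was installed as a Homebrew formula, unlinking looks something like this:

brew unlink openssl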

Then reinstalling python.

brew reinstall python@2

I was having this issue when trying to install Google Cloud SDK. After doing the previous steps, I could run the installer without a problem.

./google-cloud-sdk/install.sh
