Data professionals need to experiment, visualise their data, and separate their code into distinct stages such as input processing, model training, and prediction. prython proposes a new paradigm to help them do that.
There is a standard mode where you connect panels to each other and run a series of connected panels. The other option is free mode, where everything is part of the same R or Python session (see how the connectors disappear here). This lets you code faster, as it avoids the need to recompute other panels.
Wondering how you can test different models or code on the same data? Here we have a data prep panel connected to two independent tasks: a forecasting model and a time-series decomposition.
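In code terms, this layout is one preparation step whose output feeds two independent analyses. A minimal sketch in Python (the function names and the toy data are illustrative, not part of the prython API):

```python
import pandas as pd

def prep(raw: pd.DataFrame) -> pd.DataFrame:
    # The shared data prep panel: clean once, reuse everywhere.
    return raw.dropna().assign(value=lambda d: d["value"].astype(float))

def forecast(df: pd.DataFrame) -> float:
    # Stand-in for the forecasting panel: naive "last value" forecast.
    return df["value"].iloc[-1]

def decompose(df: pd.DataFrame) -> pd.DataFrame:
    # Stand-in for the decomposition panel: trend via a rolling mean.
    trend = df["value"].rolling(2, min_periods=1).mean()
    return df.assign(trend=trend, residual=df["value"] - trend)

raw = pd.DataFrame({"value": [1, 2, None, 4]})
clean = prep(raw)          # the prep panel runs once
print(forecast(clean))     # consumer panel 1
print(decompose(clean))    # consumer panel 2
```

Both consumers read the same `clean` object, so changing the prep step updates them together.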
Here we bring the results of three separate panels into a single panel. Every object created in those panels becomes available in this last one. The execution order is marked with a number.
You can run a single panel, all panels that consume its outputs, or all panels that act as its inputs.
For each panel, you can see the plots and dataframes it generates. This works with both standard R plots/ggplot and Python matplotlib/seaborn.
Every panel prints its results into a window that you can expand. You can also copy any of these windows and receive new outputs there. Here we use this to compare two different ML models side by side.
You can add clickable markers that redirect you to any part of the project.
For each panel, everything in the environment up to the moment it ran is saved. You can then explore it by attaching a console and running Python/R line by line.
Create panels that replicate the content of other panels. For example, suppose we have two datasets and want to run the same model (lower-left panel) over both of them. We create a replica (note the green dotted line) of the model (lower-right panel), but with a different input. Changes made to the left panel are reflected on the right one. This is meant for testing how different inputs alter the results.
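Conceptually, a replica is the same code bound to a different input: edit the logic once and both results change. A trivial sketch (the "model" here is just a placeholder):

```python
import statistics

def model(data):
    # Illustrative "model": simply the mean of the input.
    return statistics.mean(data)

dataset_a = [1, 2, 3]    # input to the original panel
dataset_b = [10, 20, 30] # input to the replica

# The original panel and its replica share code, not inputs.
result_a = model(dataset_a)
result_b = model(dataset_b)
print(result_a, result_b)  # 2 20
```

Rewriting `model` is like editing the original panel: the replica picks up the change automatically.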
Each cell in your notebook project gets loaded into a panel, and everything gets connected automatically.
You can add floating notes, brackets, and frames to identify specific parts of your project.
You can code using the panel or a larger editor with live syntax highlighting.
Here we run three panels (two in R and one in Python) at the same time. This is done by simply pressing "run" on each one of them. You can check their progress in the lower-right corner.
Array dimensions are tracked for every panel (to the right of the panel). It's easy to see how they grow and get reshaped!
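The shape information shown next to each panel is the same thing you would otherwise print by hand. For a NumPy array, for example:

```python
import numpy as np

a = np.arange(12)     # shape (12,)
print(a.shape)
b = a.reshape(3, 4)   # reshaped into a matrix: shape (3, 4)
print(b.shape)
c = b[:, :2]          # sliced: dimensions shrink to (3, 2)
print(c.shape)
d = np.stack([b, b])  # stacked: a new axis appears, shape (2, 3, 4)
print(d.shape)
```

prython surfaces these shapes automatically as each panel runs.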
Take a screenshot with one click, and draw over your project/code.
Created automatically by parsing your outputs
For each panel, you can save a snapshot of its result, which can be used to check the impact of changing your code. Here we had an original R vector, which we then transformed.
When building Keras deep learning models, there is no need to write complex code to track training in real time, or to use external tools such as TensorBoard.
When your project has many dataframes, it's hard to understand how they are related. You can use the Dataframe mode to organise your dataframes and document their relationships.
Panels can be blocked so that your code runs without them. Here we are blocking/hiding the upper-right one.
You can click on any table to load it automatically in Microsoft Excel.
Organise your code into panels and visualise everything together.
Here we are testing two deep learning models (left) and a scikit-learn logistic regression (right) on the famous iris dataset. All three can be executed with just one click and easily compared on the same screen.
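The logistic-regression half of that comparison might look like the following in plain scikit-learn (a minimal sketch, not the exact code in the screenshot; the deep learning panels would sit alongside it):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load iris and hold out a stratified test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Fit the logistic regression panel's model and score it.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2f}")
```

Each model lives in its own panel, so all three accuracies end up visible on one screen.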
Displaying and managing multiple plots has never been easier. Here we have a couple of standard plots and ggplots in R.
Here we apply a series of filters to a dataframe in R. Each panel shows the state of the dataframe after that panel was executed.
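The screenshot uses an R data frame, but the panel-per-step idea is the same in either language. A sketch of the pattern in Python/pandas (toy data, illustrative column names):

```python
import pandas as pd

df = pd.DataFrame({"city": ["A", "A", "B", "B"],
                   "sales": [10, 200, 30, 400]})

# Panel 1: keep only one city.
step1 = df[df["city"] == "A"]
print(len(step1))   # 2 rows remain

# Panel 2: filter on sales.
step2 = step1[step1["sales"] > 50]
print(len(step2))   # 1 row remains
```

Each intermediate result (`step1`, `step2`) corresponds to what one panel would display after running.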
Send us an email at firstname.lastname@example.org
Or follow us on @prython