Planet Ubuntu-it is a window into the world of the people who revolve around the Italian Ubuntu community. The opinions expressed here are those of their respective authors and do not represent the Italian Ubuntu community, let alone the international one. If you disagree with something an author has written, contact them directly or reply through this very Planet, politely and always in accordance with the Code of Conduct.
You can follow this planet in the following formats: RSS 2.0, ATOM, OPML and FOAF.
Being a hero is nice, isn’t it? You work hard, single-handedly save the day, and your teammates are eternally grateful to you. However, such behavior is, in fact, highly problematic.
At Google, we have an internal document, quoted quite often, that says to let things fail if the workload required to keep them alive is too high. Many good practices have been exported from Google and adopted across the IT world, and I believe this one could be very useful to many teams.
The idea that “heroism is bad” is not new. The internal Google document was actually a presentation at SRE TechCon 2015, and other companies have a similar take. But let’s start from the beginning.
What do we mean by heroism?
When we talk about a hero for the team, we mean a single person taking on a systemic problem to compensate for it. No matter the cost, how many hours are needed, or the consequences. Of course, thanks to the hero, the crisis due to the systemic issue is momentarily avoided.
And heroes get an immediate reward! They have completed a gigantic task, they get praised by teammates, and they feel crucial: after all, without them, everything would have failed.
Why being a hero is bad
Of course, we are talking about behavior in the working environment, not of heroic acts to actually save lives.
Heroic behavior, although encouraged in many organizations, at the end of the day brings more damage than benefit. It is bad for the individual: it takes a toll on them, leads to burnout, creates unsatisfiable expectations (you did it once, why don’t you do it again?), and puts a lot of pressure on them.
It is also bad for the team: it creates incorrect expectations among team members, and it leads to inaction, because the heroes will pick up anything left behind. Not only that, but it damages morale, because it feels like everything has been solved by the hero and everybody else has no purpose.
And it is bad for systems and processes: they don’t improve, because teams don’t realize they are broken, given that at the end of the day the heroes make everything work. No systemic fixes are carried out, ‘cause such problems are hidden by the heroes’ efforts.
We work in teams for a reason: the idea that one person has the solution to everything is ridiculous.
What to do instead?
On YouTube, there is an astonishing TED talk by Lorna Davis, “A guide to collaborative leadership”. If you have 14 minutes of spare time today, you should really use them to watch it.
There is a key concept in the video: radical interdependence.
What’s radical interdependence?
Radical interdependence is just a fancy way to say that we need each other. In a complex, ever-evolving, interconnected world, a single individual cannot change the world on their own.
And a team can deliver things that would be impossible for an individual to deliver alone.
However, quoting from the video:
Interdependence is harder than being a hero. It requires us to be open, transparent, and vulnerable. And that’s not what traditional leaders have been trained to do.
Let things fail
It’s hard, but nowadays, every time I feel the urge to step in to save the day, I stop a minute, and ask myself: “Can’t I sit down with the team and work this out?”.
And every so often, this requires things to fail: but that’s okay.
It is imperative that a good process for postmortems is in place, and that such process is blameless.
When things fail, we all sit down together, find out what went wrong, and work out what we can do better next time. This allows improvements to the system and the process, and in the end things are remarkably better than they would have been if they hadn’t failed.
It’s a complex, difficult approach, but it provides significant improvements. Have you ever tried it, or plan to do so? Can you think of an occasion where it would have been useful? Let me know in the comments!
If you have any question, other than here in the comments you can always reach me by email at hello@rpadovani.com.
Newsletter N° 008/2023 of the ubuntu-it community is available. In this issue:
Canonical and support for fourth-generation Intel Xeon CPUs
Ubuntu developers are working on a new installer
Meet Canonical at Embedded World 2023
Here is how to set up your own VPN server
Canonical releases an important security update for the Ubuntu kernel
LibreOffice 7.5.1 released
What does ChatGPT mean for the open-source community?
Security updates
Reported bugs
Development team statistics
Write for the newsletter
You can read the newsletter on this page, or download it in PDF format. If you missed the previous issues, you can find them in the archive! To receive the newsletter in your inbox every week, subscribe to the newsletter-italiana mailing list.
DS5QDR Heonmin Lee created this graphical application (quite some time ago), modeling it on the earlier Python program pyUC, a simple GUI front-end for accessing amateur radio digital networks directly from your PC.
Newsletter N° 007/2023 of the ubuntu-it community is available. In this issue:
Ubuntu 22.04.2 LTS available for download
Future Ubuntu releases will not support Flatpak
You can install Linux kernel 6.2 on Ubuntu: here is how!
What is real-time Linux, really? - Part I
GNOME 44: the best new features
Linux kernel 6.2 released
Security updates
Reported bugs
Write for the newsletter
You can read the newsletter on this page, or download it in PDF format. If you missed the previous issues, you can find them in the archive! To receive the newsletter in your inbox every week, subscribe to the newsletter-italiana mailing list.
A few days ago the Ministry of Industry and Made in Italy (MIMIT) issued the new directorial decree that sets out how the examination session for obtaining the Amateur Radio Licence will be run.
Newsletter N° 006/2023 of the ubuntu-it community is available. In this issue:
Canonical announces the availability of the real-time Linux kernel
The course on deploying ROS applications is available on The Construct platform
One more new feature for Ubuntu 23.04
The new version of Mozilla Firefox is available
The Rust implementation inside the Linux kernel keeps getting more robust
Linux kernel 6.1 will be promoted to LTS kernel
Security updates
Reported bugs
Write for the newsletter
You can read the newsletter on this page, or download it in PDF format. If you missed the previous issues, you can find them in the archive! To receive the newsletter in your inbox every week, subscribe to the newsletter-italiana mailing list.
Newsletter N° 005/2023 of the ubuntu-it community is available. In this issue:
Canonical releases a new security patch for the Ubuntu LTS releases
Canonical joins the Academy Software Foundation
Is open source really as secure as proprietary software?
GNOME is working on a new interface
Used phones: from waste to resource
Here are the Linux Foundation's first reflections on COP27
A new Steam client update has arrived
Security updates
Reported bugs
Write for the newsletter
You can read the newsletter on this page, or download it in PDF format. If you missed the previous issues, you can find them in the archive! To receive the newsletter in your inbox every week, subscribe to the newsletter-italiana mailing list.
Newsletter N° 004/2023 of the ubuntu-it community is available. In this issue:
Multipass 1.11: let's see what's new
Here comes LibreOffice 7.5
Proton 7.0-6 released with support for UNCHARTED: Legacy of Thieves Collection
Amazon warns its employees not to use ChatGPT
Oracle is the largest contributor to Linux kernel 6.1
The latest Steam client update has been released
Security updates
Reported bugs
Development team statistics
Write for the newsletter
You can read the newsletter on this page, or download it in PDF format. If you missed the previous issues, you can find them in the archive! To receive the newsletter in your inbox every week, subscribe to the newsletter-italiana mailing list.
Newsletter N° 003/2023 of the ubuntu-it community is available. In this issue:
Canonical officially announces the availability of Ubuntu Pro
Will Ubuntu 23.04 ship Telegram as a Snap package?
GNOME 43.3: more fixes and improvements
Full Circle Magazine Issue #189 in English
Here is the new NVIDIA 525.85.05 graphics driver for Linux systems
LibreOffice 7.4.5, the fifth point release, is out
Microsoft confirms its investment in OpenAI
Wine 8.0 on Ubuntu: a year of development and new features!
Security updates
Reported bugs
Write for the newsletter
You can read the newsletter on this page, or download it in PDF format. If you missed the previous issues, you can find them in the archive! To receive the newsletter in your inbox every week, subscribe to the newsletter-italiana mailing list.
A few weeks ago the Windows security update for 64-bit 22H2 systems came out, which I installed as usual without any problem.
I didn't give it much thought and kept using my desktop PC normally, until I had to program a few radios for some OM friends.
Newsletter N° 002/2023 of the ubuntu-it community is available. In this issue:
The second point release of Ubuntu 22.04 will be delayed
The future of the ZFS file system on Ubuntu doesn't look good
An interesting report from the GNOME team
Firefox 109: here's what's new!
New malware discovered on Linux systems
Microsoft would like to acquire ChatGPT!
Security updates
Reported bugs
Development team statistics
Write for the newsletter
You can read the newsletter on this page, or download it in PDF format. If you missed the previous issues, you can find them in the archive! To receive the newsletter in your inbox every week, subscribe to the newsletter-italiana mailing list.
Newsletter N° 001/2023 of the ubuntu-it community is available. In this issue:
Xubuntu 23.04 will offer an official minimal image
New security updates for the Linux kernel
Unity 7.7 and the long-awaited Wayland support
The contest for the Ubuntu 23.04 "Lunar Lobster" wallpapers is open
How to install Linux kernel 6.1 on Ubuntu 22.10
Full Circle Magazine Issue #188 in English
GNOME 42.8: improved Wayland support
Adding a Markdown preview to Gedit
LibreOffice 7.4.4 released with more than 110 bug fixes
Linux kernel 6.0 reaches End Of Life
Security updates
Reported bugs
Development team statistics
Write for the newsletter
You can read the newsletter on this page, or download it in PDF format. If you missed the previous issues, you can find them in the archive! To receive the newsletter in your inbox every week, subscribe to the newsletter-italiana mailing list.
Continuing my experiments with the pi-star hotspots I have built on various versions of the Raspberry Pi, in recent days I was asked how to quickly handle the initial connection to the AP (Access Point) of a new pi-star hotspot, using the WiFi connection from a PC.
The BrandMeister network has recently implemented the YSF Direct amateur radio digital protocol, that is, the ability, on master servers that have this feature enabled, to connect a C4FM hotspot/repeater in YSF mode directly to the master, without the intermediate "rooms" that are usually required.
Over the last few weekends (in my limited spare time...) I ran several configuration tests on my Rpi 3 B+ server, where I have already installed a few ASL nodes. The goal of my changes was to add and enable an additional radio node, alongside the ones already present and working.
The Certified Kubernetes Administrator (CKA) exam evaluates your ability to operate a Kubernetes (K8s) cluster and your knowledge of how to run jobs over the cluster.
I recently passed the exam, so I would like to share some tips and tricks, and how I prepared for it.
Before taking the exam, I already had extensive experience using K8s, although always through managed services. I had to learn everything about how a K8s cluster is actually run.
I am grateful to my employer, Google, for providing me with the time to study and covering all expenses.
I used the course hosted over A Cloud Guru. It is up-to-date, and covers everything needed for the CKA. The course is very hands-on, so if you sign up for it, I recommend doing the laboratories (they provide the environment) because the exam is a long series of tasks similar to what you would be asked to do during the lab sections of this course.
Furthermore, you should learn how to navigate the K8s docs. You can use them during the exam, so there is no point in memorizing every single field of every single resource. Do you need to create a pod? Go to the documentation for it, copy the basic example, and customize it.
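For many tasks you do not even need to copy from the docs: kubectl can generate a skeleton manifest for you. A small sketch of the habit (the resource and file names here are just placeholders):

# Generate a Pod manifest without creating anything, then edit and apply it
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
# The same idea works for a Deployment skeleton
kubectl create deployment web --image=nginx --dry-run=client -o yaml > deployment.yaml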
Simulation
When you register for the CKA certification exam on the Linux Foundation site, you will receive access to a simulated exam hosted by killer.sh. The simulation is significantly more difficult and longer than the actual exam: I attempted a simulation two days before the exam, and I achieved 55%. On the actual exam, 91%.
I highly recommend taking the simulation, not only to see how prepared you are, but to familiarize yourself with the exam environment.
The exam
The exam is 2 hours long, with a variable number of tasks. Partial credit is assigned based on how many tasks of a question you completed. You need a score of 66% to clear the exam.
During the exam you can access the K8s documentation: use this power to your advantage. The environment in which the exam runs is based on XFCE. You will have a browser, a notepad, and a terminal you can use.
Being familiar with vi can help you work more quickly, since you spend most of your time in the terminal and only use the browser to read the documentation.
You can use the notepad to keep track of which questions you skipped or want to revisit later.
During your exam, you will work on multiple K8s clusters. Each question is about one cluster, and at the beginning of the question it says which context you should use. Don’t jump into answering a question without having switched context first!
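A small sketch of that habit, with a placeholder context name taken from the question text:

# Switch to the context stated in the question before touching anything
kubectl config use-context <context-from-question>
# Double-check which context is currently active
kubectl config current-context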
Requirements
To attend the exam, you will have to download a dedicated browser, called PSI Safe Browser. You cannot download this in advance, it will be unlocked for you only 30 minutes before the beginning of the exam.
Theoretically, it is compatible with Windows, macOS, and Linux: however, many users have problems under Linux. Be aware that there could be issues, and have a macOS or Windows machine available, if possible, as a backup plan.
This browser will make sure that you are not running any background application, and that you don’t have multiple displays (only one monitor, sorry!).
After you have installed the browser, you will have to identify yourself with an official document. You will have to show that you are alone in the room, that your desk is clean, that there is nothing under your desk, and so on. You will have to show your smartphone, and then show that you are placing it somewhere out of reach. The whole check-in procedure takes time, and it is error-prone (I had to show the whole room twice, then the browser crashed, and I had to start from scratch again).
The onboarding procedure is the most stressful part of the exam: read all the requirements on the Linux Foundation Website, try to clear your desk from any unrelated thing, and start the procedure as soon as possible. In my case, it took almost 50 minutes.
Their browser is not a perfect piece of software: in my case, the virtual environment inside it had a resolution of 800x600, making it impossible to have two windows side by side. However, you spend the vast majority of your time in the terminal, switching to the browser only to copy-paste snippets from the documentation.
Tips
Keep a Windows or macOS machine nearby, if Linux doesn’t work for you;
Answering a question partially is better than not answering at all;
Always double-check the K8s context! The first thing you must do for each question is switch your kubectl context according to the instructions;
Create a file for each resource you create, so you can go back and adjust things. Moreover, if you have time at the end of the exam, it makes it much easier to check what you have done;
Remember to have a valid ID document with you for the exam check-in;
Conclusion
Time constraints and the unfamiliar environment will be your greatest challenges: however, if you have used K8s in production before, you should be able to clear the exam without any major difficulty. Spending some time training beforehand is strongly suggested, no matter your level of expertise, just to understand the format of the exam.
Good luck, and reach out if you have any question, here in the comments or by email at hello@rpadovani.com.
Here you will find the information you need to install the new 7.2+ version of Supermon (ASL version), called Fresh-Install. This installation includes all the standard Supermon configuration files.
To create a Kubernetes deployment, we must specify the matchLabels field, even though its value must match the one we specify in the template. But why? Cannot Kubernetes be smart enough to figure it out without us being explicit?
Did you know? K8s is short for Kubernetes ‘cause there are 8 letters between K and S.
A Kubernetes (K8s) Deployment provides a way to define how many replicas of a Pod K8s should aim to keep alive. I’m especially bothered by the Deployment spec’s requirement that we must specify a label selector for pods, and that the label selector must match the same labels we have defined in the template. Why can’t we just define them once? Why can’t K8s infer them on its own? As I will explain, there is actually a good reason, but to understand it you have to go down quite a deep rabbit hole.
A deployment specification
Firstly, let’s take a look at a simple deployment specification for K8s:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx # Why can't K8s figure it out on its own?
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
This is a basic deployment, taken from the official documentation, and here we can already see that we need to fill the matchLabels field.
What happens if we drop the “selector” field completely?
➜ kubectl apply -f nginx-deployment.yaml
The Deployment "nginx-deployment" is invalid:
* spec.selector: Required value
Okay, so we need to specify a selector. Can it be different from the “labels” field in the “template”? I will try with:
matchLabels:
  app: nginx-different
➜ kubectl apply -f nginx-deployment.yaml
The Deployment "nginx-deployment" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"nginx"}: `selector` does not match template `labels`
There are usually good reasons behind what seems like a poorly-thought-out implementation, and that is true here as well, as we’ll see.
As expected, K8s doesn’t like it: the selector must match the template. So, we must fill in a field with a well-defined value. It really seems like something a computer could do for us, so why do we have to specify it manually? It drives me crazy to have to do something a computer could do without any problem. Or could it?
Behind a deployment
You will probably never need to manipulate ReplicaSet objects directly: use a Deployment instead, and define your application in its spec section.
How does a Deployment work? Behind the scenes, when you create a new Deployment, K8s creates two different objects: a Pod definition, using the “template” field of the Deployment as its specification, and a ReplicaSet. You can easily verify this using kubectl to retrieve pods and replica sets after you have created a deployment.
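A quick way to see this for yourself, assuming the nginx-deployment example above has been applied:

# The Deployment owns a ReplicaSet, which in turn owns the Pods
kubectl get deployments,replicasets,pods -l app=nginx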
A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
A ReplicaSet needs a selector that specifies how to identify Pods it can acquire and manage: however, this doesn’t explain why we must specify it, and why K8s cannot do it on its own. In the end, a Deployment is a high-level construct that should hide ReplicaSet quirks: such details shouldn’t concern us, and the Deployment should take care of them on its own.
Digging deeper
Understanding how a Deployment works doesn’t help us find the reason for this particular behavior. Given that Googling doesn’t seem to turn up any interesting results on this particular topic, it’s time to go to the source (literally): luckily, K8s is open source, so we can check its history on GitHub.
Going back in time, we find out that K8s actually used to infer the matchLabels field! The behavior was removed with apps/v1beta2 (released with Kubernetes 1.8), through Pull Request #50164. That pull request links to issue #50339, which, however, has a very brief description and lacks the reasoning behind the choice.
The linked issues are rich in technical details, and they have many comments. If you want to understand exactly how kubectl apply works, take a look!
Luckily, other issues provide far more context, such as #26202: it turns out the main problem with defaulting appears when labels are mutated in subsequent updates to the resource: the patch operation is somewhat fickle, and apply breaks when you update a label that was used as a default.
Many other concerns have been described in depth by Brian Grant in issue #15894.
Basically, assuming a default value creates many questions and concerns: what’s the difference between explicitly setting a label to null and leaving it empty? How do you manage all the cases where users left the default and now want to update the resource to manage the label themselves (or the other way around)?
Conclusion
Given that in K8s everything is intended to be declarative, the developers decided that explicit is better than implicit, especially for corner cases: specifying things explicitly allows more robust validation at creation and update time, and removes some possible bugs that existed due to the uncertainties caused by the lack of clarity.
Shortly after dropping the defaulting behavior, the developers also made the labels immutable, to make sure behaviors were well-defined. Maybe in the future labels will be mutable again, but for that to work, somebody needs to write a well-thought-out design document explaining how to manage all the possible edge cases that could arise when a controller is updated.
I hope you found this deep dive into the question interesting. I spent some time on it, since I was very curious, and I hope that the next person with the same question can find this article and get an answer more quickly than I did.
If you have any question, or feedback, please leave a comment below, or write me an email at hello@rpadovani.com.
For a few days now, and again last night, while discussing it with my mentor Mauro IU0NDT, he advised me to install version 7 of the Supermon node manager (I found the new 7.1+ version ready to go), which can run as a web service alongside the previous node manager.
A few days ago the stable 2.0.0 version of the DVSwitch Mobile Android app was released. Over the past few days I have run several tests using this Android app on my ASL nodes, also connecting other nodes in WT (Web Transceiver) mode.
A few days ago a new version of the firmware in question was released, with several additions that you can read about in the document included in the firmware distribution package.
Helm is a remarkable piece of technology for managing your Kubernetes deployments, and used along with Terraform it is perfect for deployments that follow the GitOps strategy.
What’s GitOps? Great question! As this helpful introductory article summarizes, it is Infrastructure as Code, plus Merge Requests, plus Continuous Integration. Follow the link to explore the concept further.
However, Helm has a limitation: it doesn’t manage the lifecycle of Custom Resource Definitions (CRDs), meaning it will only install the CRD during the first installation of a chart. Subsequent chart upgrades will not add or remove CRDs, even if the CRDs have changed.
This can be a huge problem for a GitOps approach: having to update CRDs manually isn’t a great strategy, and it makes it very hard to stay in sync with deployments and rollbacks.
For this very reason, I created a small Terraform module that reads CRD manifests from online sources and applies them. By parametrizing the chart version, it is simple to keep Helm charts and CRDs in sync, without having to do anything manually.
Example
Karpenter is an incredible open-source Kubernetes node provisioner built by AWS. If you haven’t tried it yet, take a few minutes to read about it.
Let’s use Karpenter as an example of how to use the module. We want to deploy the chart with the Helm provider, and we use this new Terraform module to manage its CRDs as well:
resource"helm_release""karpenter"{name="karpenter"namespace="karpenter"repository="https://charts.karpenter.sh"chart="karpenter"version=var.chart_version// ... All the other parameters necessary, skipping them here ...}module"karpenter-crds"{source="rpadovani/helm-crds/kubectl"version="0.1.0"crds_urls=["https://raw.githubusercontent.com/aws/karpenter/v${var.chart_version}/charts/karpenter/crds/karpenter.sh_provisioners.yaml","https://raw.githubusercontent.com/aws/karpenter/v${var.chart_version}/charts/karpenter/crds/karpenter.k8s.aws_awsnodetemplates.yaml"]}
As you can see, we parametrize the version of the chart, so we can be sure the CRDs have the same version as the Helm chart. Behind the scenes, Terraform downloads the raw files and applies them with kubectl. Of course, the operator running Terraform needs enough permissions to do so, so you need to configure the kubectl provider.
The URLs must point to just the Kubernetes manifests, and this is why we use the raw version of the GitHub URL.
The source code of the module is available on GitHub, so you are welcome to chime in and open any issue: I will do my best to address problems and implement suggestions.
Conclusion
I use this module in production, and I am very satisfied with it: it brings the last piece I was missing, the CRDs, under GitOps. Now, my only task when installing a new chart is finding all the CRDs and building URLs that contain the chart version. Terraform will take care of the rest.
I hope this module can be useful to you as it is to me. If you have any question, or feedback, or if you would like some help, please leave a comment below, or write me an email at hello@rpadovani.com.
Contributing to any open-source project is a great way to spend a few hours each month. I started more than 10 years ago, and it has ultimately shaped my career in ways I couldn’t have imagined!
The new GitLab logo, just announced on the 27th April 2022.
Nowadays, my contributions focus mostly on GitLab, so you will see many references to it in this blog post, but the content is quite generalizable; I would like to share my experience to highlight why you should consider contributing to an open-source project.
Writing blog posts, tweeting, and helping foster the community are nice ways to contribute to a project ;-)
And contributing doesn’t mean only coding: there are countless ways to help an open-source project: translating it into different languages, reporting issues, designing new features, writing documentation, offering support on forums and StackOverflow, and so on.
Before diving into this wall of text, be aware that there are three main parts in this blog post after this introduction: a context section, where I describe my personal experience with open source and what it means to me; then, a list of reasons to contribute to any project; and, in closing, some tips on how to start contributing, both in general and specifically for GitLab.
Context
Ten years ago, I was fresh out of high school, without (almost) any knowledge about IT: however, I found out that I had a massive passion for it, so I enrolled in a Computer Engineering university (boring!), and I started contributing to Ubuntu (cool!). I began with the Italian Local Community, and I soon moved to Ubuntu Touch.
I often considered rewriting that old article: however, I have a strange attachment to it as it is, with all its English mistakes. It was one of the first blog posts I ever wrote, and it was really well received!
We all know how it ended, but it has still been a fantastic ride, with a lot of great moments: just take a look at the archive of this blog, and you can see the passion and the enthusiasm I had. I was so enthusiastic that I wrote a blog post similar to this one! I think it highlights really well the considerable difference 10 years make.
Back then, I wasn’t working, just studying, so I had a lot of spare time. My English was way worse. I was at the beginning of my journey in the computer world, and Ubuntu ultimately shaped a big part of it. My knowledge was very limited, and I had never worked before. Contributing to Ubuntu gave me a glimpse of the real world, let me meet outstanding engineers who taught me a lot, and boosted my CV, helping me land my first job.
Advocacy, as in this blog post, is a great way to contribute! You spread awareness, which helps to find new contributors, and maybe inspires some young student to give it a try!
Since then, I completed a master’s degree in C.S., worked in different companies in three different countries, and became a professional. Nowadays, my contributions to open source are more sporadic (adulthood, yay), but given how much it meant to me, I am still a big fan, and I try to contribute when I can, and how I can.
Why contributing
Friends
During my years contributing to open-source software, I’ve met countless incredible people, with some of whom I’ve become friends. In the old blog post I mentioned David: over the last 9 years we have stayed in touch and met on different occasions in different cities, the last time as recently as last summer. Back then, he was a Manager in the Ubuntu Community Team at Canonical; later, he became Director of Community Relations at GitLab. Small world!
The Ubuntu Touch Community Team in Malta, in 2014. It has been an incredible week, sponsored by Canonical!
One interesting thing is that people contribute to open-source projects from their homes, all around the world: when I travel, I usually know somebody living in my destination city, so I always have at least one night booked for a beer with somebody I’ve met only online; it’s a pleasure to speak with people from different backgrounds and get a glimpse into their lives, all united by one common passion.
Fun
Having fun is important! You cannot spend your leisure time getting bored or annoyed: contributing to open source is fun ‘cause you pick the problems you would like to work on, and you skip all the bureaucracy and meetings that are often needed in your daily job. You can be challenged, feel useful, and improve a product, without any manager on your shoulders, and at your own pace.
Being up-to-date on how things evolve
For example, the GitLab Handbook is a precious collection of resources, ideas, and methodologies on how to run a 1000 people company in a transparent, full remote, way. It’s a great reading, with a lot of wisdom.
Contributing to a project typically gives you an idea of how the teams behind it work, which technologies they use, and which methodologies they follow. Many open-source projects use bleeding-edge technologies, or blaze the trail. Being in contact with new ideas is a great way to know where the industry is headed and what the latest news is: this is especially true if you hang out in the channels where the community meets, be they Discord, forums, or IRC (well, IRC is not really bleeding-edge, but it is fun).
Learning
When contributing in an area that doesn’t match your expertise, you always learn something new: reviews are usually precise and on point, and projects of a remarkable size commonly have a coaching team that helps you start contributing and guides you on how to land your first patches.
Feel also free to ping me directly if you want some general guidance!
Giving back
I work as a Platform Engineer. My job is built on an incredible amount of open-source libraries and amazing FOSS services, and I basically just have to glue different pieces together. When I find some rough edge that could be improved, I try to do so.
Nowadays, I find well-maintained documentation crucial, so after I have achieved something complex, I usually go back and try to improve the documentation where it is lacking. It is my tiny way of saying thanks and giving back to a world that has really shaped my career.
This is also what most of my blog posts are about: after completing something I struggled with, I find it nice to be able to share that information. Every so often, years later, I find myself following my own guide, and I really appreciate it when other people find the content useful too.
Swag
Who doesn’t like swag? :-) Numerous projects have delightful swag, starting with stickers, that they like to share with the whole community. Of course, it shouldn’t be your main driver, ‘cause you will soon notice it is ultimately not worth the amount of time you spend contributing, but it is charming to have GitLab socks!
A GitLab branded mechanical keyboard, courtesy of the GitLab's security team! This very article has been typed with it!
Tips
I hope I inspired you to contribute to some open-source project (maybe GitLab!). Now, let’s talk about some small tricks on how to begin easily.
Find something you are passionate about
You must find a project you are passionate about, and that you use frequently. Looking forward to a release, knowing that your contributions will be included, is wonderfully satisfying, and it can really push you to do more.
Moreover, if you already know the project you want to contribute to, you probably already know its biggest pain points, and where the project needs some contributions.
Start small and easy
You don’t need to make gigantic contributions to begin. Find something tiny, so you can get familiar with the project’s workflows, and with how contributions are received.
Launchpad and bazaar instead of GitLab and git — down the memory lane!
My journey with Ubuntu started by correcting a typo in a README, and here I am, years later, having contributed to dozens of projects and built a career in the C.S. field. Back then, I really had no idea what my future would hold.
For GitLab, you can take a look at the issues marked as “good for new contributors”. They are designed to be addressed quickly and to onboard new people into the community. In this way, you don’t have to focus on the difficulties of the task at hand, but you can easily explore how the community works.
Writing issues is a good start
Writing high-quality issues is a great way to start contributing: the maintainers of a project are not always aware of how the software is used, and cannot be aware of all the problems. If you know that something could be improved, write it down: spend some time explaining what happens, what you expect, and how to reproduce the problem, and maybe suggest some solutions as well! Perhaps the first issue you write down could be the very first issue you resolve.
Not much time required!
Contributing to a project doesn’t necessarily require a lot of time. When I was younger, I definitely dedicated way more time to open-source projects, implementing gigantic features. Nowadays, I don’t do that anymore (life is much more than computers), but I like to think that my contributions are still useful. Still, I don’t spend more than a couple of hours a month, depending on my schedule and on how much it rains (yep, in winter I definitely contribute more than in summer).
GitLab is super easy
Do you use GitLab? Then you should undoubtedly try to contribute to it. It is easy, it is fun, and there are many ways. Take a look at this guide, hang out on Gitter, and see you around. ;-)
Next week (9th-13th May 2022) there is also a GitLab Hackathon! It is a really fun and easy way to start contributing: many people are available to help you, there are video sessions about contributing, and by making even a small contribution you will receive a nice prize.
And if I was able to do it with my few contributions, you can as well!
And in time, if you are consistent in your contributions, you can become a GitLab Hero! How cool is that?
I really hope this wall of text made you consider contributing to an open-source project. If you have any question, or feedback, or if you would like some help, please leave a comment below, or write me an email at hello@rpadovani.com.
Rust is all the rage these days, and it is indeed a nice language to work with. In this blog post, I take a look at a small challenge: how to host private crates in the form of Git repositories, making them easily available both to developers and to CI/CD systems.
Ferris the crab, unofficial mascot for Rust.
A Rust crate can be hosted in different places: on the public registry at crates.io, but also in a private Git repo hosted somewhere. In the latter case, there are some challenges in making the crate easily accessible to both engineers and CI/CD systems.
Developers usually authenticate through SSH keys: given that humans are terrible at remembering long passwords, SSH keys free us from memorizing credentials and give us an authentication method that just works. Security is quite high as well: each device has a unique key, and if a device gets compromised, deactivating the related key solves the problem.
On the other hand, it is better to avoid using SSH keys for CI/CD systems: such systems are highly volatile, with dozens, if not hundreds, of instances that get created and destroyed hourly. For them, a short-lived token is a much better choice, since creating and revoking SSH keys on the fly can be tedious and error-prone.
This raises a challenge with Rust crates: if a crate is hosted in a private repository, the dependency can be reached either through SSH or through HTTPS, as in the sketch below.
In the future, I hope we don’t need to host private crates on Git repositories, GitLab should add a native implementation of a private registry for crates.
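A hypothetical Cargo.toml dependency section showing the two options (the group and crate names are placeholders):

[dependencies]
# Option 1: SSH, convenient for developers who already have SSH keys set up
my-private-crate = { git = "ssh://git@gitlab.com/my-group/my-private-crate.git", tag = "v0.1.0" }

# Option 2: HTTPS, convenient for CI/CD systems that use short-lived tokens
# my-private-crate = { git = "https://gitlab.com/my-group/my-private-crate.git", tag = "v0.1.0" }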
The former is really useful and simple for engineers: authentication is the same as always, so there is no need to worry about it. However, it is awful for CI/CD: now there is a need to manage the lifecycle of SSH keys for automated systems.
The latter is awful for engineers: they need an additional authentication method, slowing them down, and of course there will be authentication problems. On the other hand, it is great for automated systems.
How to reconcile the two worlds?
Well, let’s use them both! In the Cargo.toml file, use the SSH protocol, so developers can simply clone the main repo, and they will be able to clone the dependencies without further hassle.
Then, configure the CI/CD system to clone every dependency through HTTPS, thanks to a neat feature of Git itself: insteadOf.
url.<base>.insteadOf:
Any URL that starts with this value will be rewritten to start, instead, with <base>. In cases where some site serves a large number of repositories, and serves them with multiple access methods, and some users need to use different access methods, this feature allows people to specify any of the equivalent URLs and have Git automatically rewrite the URL to the best alternative for the particular user, even for a never-before-seen repository on the site. When more than one insteadOf strings match a given URL, the longest match is used.
Basically, it allows rewriting part of the URL automatically. In this way, it is easy to change the protocol used: developers will be happy, and the security team won’t have to scream about long-lived credentials on CI/CD systems.
The CI_JOB_TOKEN is a unique token valid only for the duration of the GitLab pipeline. In this way, even if a machine is compromised, or logs leak, the code is still safe.
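A minimal sketch of what the CI job could run before building, assuming the crates live on gitlab.com and that Cargo.toml uses ssh:// URLs as above (adjust the host and paths to your setup):

# Rewrite SSH URLs to HTTPS, authenticated with the short-lived job token
git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/".insteadOf "ssh://git@gitlab.com/"
# Cargo may need to shell out to the git CLI so that the rewrite is honored
export CARGO_NET_GIT_FETCH_WITH_CLI=true
cargo build --release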
What do you think about Rust? If you use it, have you integrated it with your CI/CD systems? Share your thoughts in the comments below, or drop me an email at riccardo@rpadovani.com.
AWS EKS is a remarkable product: it manages Kubernetes for you, letting you focus on creating and deploying applications. However, if you want to manage permissions according to the shared responsibility model, you are in for a wild ride.
First, what’s the shared responsibility model? Well, to design a well-architected application, AWS suggests following six pillars. Among these six pillars, one is security. Security includes sharing responsibility between AWS and the customer. In particular, and I quote,
Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.
Beautiful, isn’t it? AWS gives us a powerful tool, IAM, to manage permissions; we have to configure things in the best way, and AWS gives us the way to do so. Or does it? Let’s take a look together.
Our goal
I would say the goal is simple, but since we are talking about Kubernetes, things can never be just simple.
Our goal is quite straightforward: setting up a Kubernetes cluster for our developers. Given that AWS offers AWS EKS, a managed Kubernetes service, we only need to configure it properly, and we are done. Of course, we will follow best practices to do so.
A proper setup
Infrastructure as code is out of scope for this post, but if you have never heard about it before, I strongly suggest taking a look into it.
Of course, we don’t use the AWS console to manually configure stuff, but Infrastructure as Code: basically, we will write some code that calls the AWS APIs on our behalf to set up AWS EKS and everything related to it. In this way, we get a reproducible setup that we can deploy in multiple environments, among countless other advantages.
Moreover, we want to avoid launching scripts that interact with our infrastructure from our PC: we prefer not to have the permissions to destroy important stuff! Separation of concerns is fundamental, and we want to write code without worrying about causing consequences in the real world. All our code should be vetted by somebody else through a merge request, and after it is approved and merged to our main branch, a runner will pick it up and apply the changes.
A runner is any CI/CD that will execute the code on your behalf. In my case, it is a GitLab runner, but it could be any continuous integration system.
We are at the core of the problem: our runner should follow the principle of least privilege: it should be able to do only what it needs to do, and nothing more. This is why we will create an IAM role just for it, with only the permissions to manage our EKS cluster and everything in it, and nothing more.
I could go on a massive rant about how bad the AWS documentation for IAM is, not only for EKS, but I will leave that for another day.
The first part of creating a role with minimum privileges is, well, understanding what minimum means in our case. A starting point is the AWS documentation: unfortunately, it is always a bad starting point where IAM permissions are concerned, ‘cause it is always too generous in the permissions it grants.
The “minimum” permission according to AWS
According to the guide, the minimum permission necessary for managing a cluster is being able to perform any action on any EKS resource. A bit far-fetched, isn’t it?
Okay, hardening this will be fun, but hey, do not let bad documentation get in the way of a proper security posture.
You know what will get in the way? Bugs! A ton of bugs, with absolutely useless error messages.
I started by limiting access to only the namespace of the EKS cluster I wanted to create. I naively thought that we could simply limit access to the resources belonging to the cluster. But, oh boy, I was mistaken!
Looking at the documentation for IAM resources and actions, I created this policy:
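The exact policy isn’t reproduced here; a hypothetical reconstruction of the idea, with placeholder region, account ID, and cluster name, could look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": [
        "arn:aws:eks:eu-central-1:123412341234:cluster/my-cluster",
        "arn:aws:eks:eu-central-1:123412341234:*/my-cluster/*"
      ]
    }
  ]
}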
Unfortunately, if a role with these permissions tries to create a cluster, this error message appears:
Error: error creating EKS Add-On (my-cluster:kube-proxy): AccessDeniedException: User: arn:aws:sts::123412341234:assumed-role/<role>/<iam-user> is not authorized to perform: eks:TagResource on resource: arn:aws:eks:eu-central-1:123412341234:/createaddon
I have to say that at least the error message gives you a hint: the /createaddon action is not scoped to the cluster.
After fighting with different policies for a while, I asked DuckDuckGo for help, and indeed somebody had reported this problem to AWS before, in this GitHub issue.
What the issue basically says is that if we want to give an IAM role permission to manage an add-on inside a cluster, we must give it permissions over all the EKS add-ons in our AWS account.
This of course breaks the AWS shared responsibility principle, ‘cause they don’t give us the tools to uphold our part of the deal. This is why it is a real and urgent issue, as they also mention in the ticket:
Can’t share a timeline in this forum, but it’s a high priority item.
And indeed it is such a high priority that it was reported on the 3rd of December 2020, and today, more than one year later, the issue is still there.
To add insult to injury, you have to write the right policy manually, because if you use the IAM interface to select “Any resource” for the add-ons, as in the screenshot below, it will generate the wrong policy! If you check carefully, the generated resource name is arn:aws:eks:eu-central-1:123412341234:addon/*/*/*, which of course doesn’t match the ARN expected by AWS EKS. Basically, even if you are far too permissive and you use the tools that AWS provides, you still end up with a broken policy.
Do you have some horror story about IAM yourself? I have a lot of them, and I am thinking about a more general post. What do you think? Share your thoughts in the comments below, or drop me an email at riccardo@rpadovani.com.
Terraform is a powerful tool. However, it has some limitations: since it uses the AWS APIs, it doesn’t have a native way to check whether an EC2 instance has finished running cloud-init before marking it as ready. A possible workaround is asking Terraform to SSH into the instance and wait until it is able to establish a connection before marking the instance as ready.
Terraform logo, courtesy of HashiCorp.
I find using SSH in Terraform quite problematic: you need to distribute a private SSH key to anybody who will launch the Terraform script, including your CI/CD system. This is a no-go for me: it adds the complexity of managing SSH keys, including their rotation. There is a huge issue on the Terraform repo on GitHub about this functionality, and the most voted solution is indeed connecting via SSH to run a check:
provisioner"remote-exec"{inline=["cloud-init status --wait"]}
AWS Systems Manager Run Command
The idea of using cloud-init status --wait is indeed quite good. The only problem is how to ask Terraform to run such a command. Luckily for us, AWS has a service, AWS SSM Run Command, that allows us to run commands on an EC2 instance through the AWS APIs! In this way, our CI/CD system only needs an appropriate IAM role and a way to invoke AWS APIs. I use the AWS CLI in the examples below, but you can adapt them to any language you prefer.
Prerequisites
If you don’t know AWS SSM yet, go and take a look at its introductory guide.
There are some prerequisites to use AWS SSM Run Command: we need to have AWS SSM Agent installed on our instance. It is preinstalled on Amazon Linux 2 and Ubuntu 16.04, 18.04, and 20.04. For any other OS, we need to install it manually: it is supported on Linux, macOS, and Windows.
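A quick way to verify the agent is up on a systemd-based Linux instance (just a sanity check, not part of the Terraform code; on Ubuntu the agent ships as a snap):

# Check that the SSM agent is installed and running
sudo systemctl status amazon-ssm-agent
# On Ubuntu, the snap-packaged agent can be inspected like this instead
sudo snap services amazon-ssm-agent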
The user or the role that executes the Terraform code needs to be able to create, update, and read AWS SSM Documents, and to run SSM commands. A possible policy could look like this:
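The original policy isn’t included here; a hypothetical sketch of what it might contain (tighten the resources for real use):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:CreateDocument",
        "ssm:UpdateDocument",
        "ssm:UpdateDocumentDefaultVersion",
        "ssm:DeleteDocument",
        "ssm:DescribeDocument",
        "ssm:GetDocument",
        "ssm:SendCommand",
        "ssm:GetCommandInvocation",
        "ssm:ListCommands"
      ],
      "Resource": "*"
    }
  ]
}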
If we already know the names of the documents, or the instances where we want to run the commands, it is better to lock down the policy by specifying the resources, according to the principle of least privilege.
Last but not least, we need to have the AWS CLI installed on the system that will execute Terraform.
The Terraform code
After having set up the prerequisites as above, we need two different Terraform resources. The first will create the AWS SSM Document with the command we want to execute on the instance. The second one will execute that command while provisioning the EC2 instance.
The AWS SSM Document code will look like this:
resource"aws_ssm_document""cloud_init_wait"{name="cloud-init-wait"document_type="Command"document_format="YAML"content=<<-DOC
schemaVersion: '2.2'
description: Wait for cloud init to finish
mainSteps:
- action: aws:runShellScript
name: StopOnLinux
precondition:
StringEquals:
- platformType
- Linux
inputs:
runCommand:
- cloud-init status --wait
DOC
}
We can refer such document from within our EC2 instance resource, with a local provisioner:
resource"aws_instance""example"{ami="my-ami"instance_type="t3.micro"provisioner"local-exec"{interpreter=["/bin/bash","-c"]command=<<-EOF
set -Ee -o pipefail
export AWS_DEFAULT_REGION=${data.aws_region.current.name}
command_id=$(aws ssm send-command --document-name ${aws_ssm_document.cloud_init_wait.arn} --instance-ids ${self.id} --output text --query "Command.CommandId")
if ! aws ssm wait command-executed --command-id $command_id --instance-id ${self.id}; then
echo "Failed to start services on instance ${self.id}!";
echo "stdout:";
aws ssm get-command-invocation --command-id $command_id --instance-id ${self.id} --query StandardOutputContent;
echo "stderr:";
aws ssm get-command-invocation --command-id $command_id --instance-id ${self.id} --query StandardErrorContent;
exit 1;
fi;
echo "Services started successfully on the new instance with id ${self.id}!"
EOF
  }
}
From now on, Terraform will wait for cloud-init to complete before marking the instance ready.
Conclusion
AWS Session Manager, AWS Run Command, and the other tools in the AWS Systems Manager family are quite powerful, and in my experience they are not widely used. I find them extremely useful: for example, they also allow connecting via SSH to instances without having any port open, including port 22! Basically, they allow managing and running commands inside instances purely through AWS APIs, with a lot of benefits, as AWS explains:
Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also allows you to comply with corporate policies that require controlled access to instances, strict security practices, and fully auditable logs with instance access details, while still providing end users with simple one-click cross-platform access to your managed instances.
Do you have any questions, feedback, criticism, or requests for support? Leave a comment below, or drop me an email at riccardo@rpadovani.com.
After years of blogging, I’ve finally chosen to add a comment system, including reactions, to this blog. I’ve done so to make it easier to engage with the four readers of my blabbering: of course, it took some time to choose the right comment provider, but finally, here we are!
A picture I took at the CCC 2015 - and the moderation policy for comments!
I chose, long ago, not to put any client-side analytics on this website: while the nerd in me loves graphs and numbers, I don’t think they are worth the loss of privacy and performance for the readers. However, I am curious to get feedback on the content I write, whether some of it is useful, and how I can improve. Over the years, I’ve received various emails about some posts, and they are heart-warming. With comments, I hope to reduce the friction of communicating with me and to have some meaningful interaction with you.
Enter hyvor.com
Looking for a comment system, I had three requirements in mind: being privacy-friendly, being performant, and being managed by somebody else.
A lot of comment systems are “free” because they live on advertisements, or on tracking user activity, and more often on a combination of both. However, since I am the one who wants comments on my website, I find it dishonest that you, the user, have to pay the price for it. So, I was looking for something that doesn’t track users and bases its business on dear old money. My website, my wish, my wallet.
I find performance important, and unfortunately quite undervalued on the bloated Internet. I like having just some HTML, some CSS, and the minimum necessary JavaScript, so whoever stumbles on these pages doesn’t waste CPU and time waiting for rendering. Since this is a static website, there isn’t a server side where I could put comments. I had to find a really lightweight JavaScript-based comment system.
Given these two prerequisites, you would say the answer is obvious: find a comment system that you can self-host! And you would be perfectly right - however, since I already spend 8 hours a day keeping stuff online, I really don’t want to have to care about performance and uptime in my free time - I definitely prefer going for a beer with a friend.
After some shopping around, I’ve chosen to go with Hyvor Talk, since it meets all three requirements above. I’ve read nice things about it, so let’s see how it goes! And if you don’t see comments at the end of the page, a privacy plugin in your browser is probably blocking them - it’s up to you whether to whitelist my website or to communicate with me in other ways ;-)
A nice plus of Hyvor is that it also supports reactions, so if you are in a hurry but still want to leave quick feedback on the post, you can simply click a button. Fancy, isn’t it?
Moderation
The Internet can be ugly sometimes, and this is why I will keep a close eye on the comments, and I will probably adjust moderation settings in the future based on how things evolve - maybe no one will comment, and then there will be no need for strict moderation! The only rule I ask you to abide by, and that I’ve set as the moderation policy, is: “Be excellent to each other”. I read it at the CCC Camp 2015, and it has stuck with me: like every short sentence, it cannot capture all the nuances of human interaction, but I think it is a very solid starting point. If you have any concern or feedback you prefer not to express in public, feel free to reach me through email. Otherwise, I hope to see you all in the comment section ;-)
Questions, comments, feedback, critics, suggestions on how to improve my English? Leave a comment below, or drop me an email at riccardo@rpadovani.com. Or, from today, leave a comment below!
“Build smaller, faster, and more secure desktop applications with a web frontend” is the promise made by Tauri. And indeed, it is a great Electron replacement. But since it is in its early days (the beta has just been released!), a bit of documentation is still missing, and there aren’t many examples on the internet of how to write code for it.
Haven’t you started with Tauri yet? Go and read the intro doc!
One thing that took me a bit of time to figure out is how to read environment variables from the frontend of the application (JavaScript / Typescript or whichever framework you are using).
The Command
As usual, once you know the solution, the problem seems easy. In this case, we just need an old trick: delegating!
In particular, we will use the Command API to spawn a sub-process (printenv) and read the returned value. The code is quite straightforward:
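The original snippet isn’t reproduced here; a minimal sketch of the idea, assuming the @tauri-apps/api shell module and treating the variable name as a placeholder:

import { Command } from "@tauri-apps/api/shell";

// Hypothetical helper: read one environment variable by delegating to `printenv`
async function readEnvVar(name: string): Promise<string> {
  const output = await new Command("printenv", name).execute();
  if (output.code !== 0) {
    throw new Error(`printenv exited with code ${output.code} for ${name}`);
  }
  return output.stdout.trim();
}

// Usage: readEnvVar("HOME").then((value) => console.log(value));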
The example is in TypeScript, but it can easily be transformed into standard JavaScript.
The Command API is quite powerful, so you can read its documentation to adapt the code to your needs.
Requirements
There are another couple of things you should consider: first, you need to have the @tauri-apps/api package installed in your frontend environment. You also need to allow your app to execute commands. Since Tauri puts a strong emphasis on security (and rightly so!), your app is sandboxed by default. To be able to use the Command API, you need to enable the shell.execute API.
It should be as easy as setting tauri.allowlist.shell.execute to true in your tauri.conf.json config file.
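For reference, a 1.x-era configuration would contain something roughly like this (the exact nesting may vary slightly between versions):
{
  "tauri": {
    "allowlist": {
      "shell": {
        "execute": true
      }
    }
  }
}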
Tauri is very nice, and I really hope it will conquer the desktops and replace Electron, since it is lighter, faster, and safer!
Questions, comments, feedback, critiques, suggestions on how to improve my English? Leave a comment below, or drop me an email at riccardo@rpadovani.com.
Want to quit from a conversation with a bot provided through Dialogflow? You can use an Italian city, too.
Imagine that you go to the registry office.
The employee has to ask different questions, like your name, birthday, and so on.
He has just asked the first one, and you expect a lot more questions.
Then comes the second one:
BOT: “What’s your first name?”
USER: “John”
BOT: “Sorry to hear that. Bye. Next, please!”
This is what I am going to tell you in this story, where our main character is not the employee but Google Dialogflow.
BOT: Where are you from?
USER: New York City
BOT: Sorry to hear this, bye!
Wait, what?
Imagine that a bot is asking a simple question “where are you from?”. You insert the correct value and BOOOM. The bot suddenly closes the conversation!
We use Google Dialogflow to provide a rich conversation experience to our customers, driving them through different Intents according to some data that we need them to submit and/or modify.
google.com – Intents
To give context to someone not used to Dialogflow, an Intent categorizes an end-user’s intention for one conversation turn. For each agent, you define many intents, where your combined Intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression, Dialogflow matches the end-user expression to the best intent in your agent.
When an intent is matched at runtime, Dialogflow provides the extracted values from the end-user expression as parameters.
Each parameter is described by different attributes, forming a parameter configuration. For this article, let’s say that we are interested at some point in the conversation in a parameter called city that has an Entity type @sys.geo-city.
That’s it. We ask for a parameter called city and we expect our user to write a city in his response:
“Los Angeles”
“I live in Los Angeles”
“I was in Los Angeles”
…
are all valid sentences for us: Los Angeles is extracted from the sentence and assigned to the parameter city.
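For instance, when one of those sentences is matched, the webhook request’s queryResult carries the extracted parameter, roughly like this (a fragment in the Dialogflow ES webhook format; the intent name is hypothetical):
{
  "queryResult": {
    "queryText": "I live in Los Angeles",
    "parameters": {
      "city": "Los Angeles"
    },
    "intent": {
      "displayName": "provide-city"
    }
  }
}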
Let’s finish this paragraph with the definition of Context: “Dialogflow contexts are similar to natural language context. If a person says to you ‘they are orange’, you need context in order to understand what the person is referring to.”
Conversation exits
Dialogflow provides several built-in features that help you create a rich conversation without having to reinvent the wheel. One of them is conversation exits: certain “magic words” (think of phrases meaning “stop” or “cancel”) immediately end the conversation.
It’s impossible to prevent this behavior from happening; you can only apply some custom logic and reply one last time with a custom response.
Intent, parameters, conversation exits: a recap.
So, we are in a specific Intent, asking the user to give us a city name for our city parameter. At the same time, we know that some magic words will stop the conversation.
The Italian city that stops the conversation
Now we have all the elements to introduce the problem described in the article title. Our clients are from Italy, so the chat is in Italian, and 99.99% of the time the user is going to answer our question with an Italian city.
One day, we had a support request from one of our clients: “When I enter the city where I live, the system replies with ‘OK, canceled. Goodbye.’”.
-.-‘.
Open the logs.
Open the Dialogflow console.
Check the history.
Oh, wait, here it is! The city is called Fermo, and “Fermo!” means “Stop!”.
BOT: Can you please tell me where XYZ happened?
USER: Fermo
BOT: Sure, goodbye.
The user’s reaction would be something like -.-‘ .
What the Dialogflow Support team told us about it
We tried to reach out to the Dialogflow support team.
This is the most important part of the email:
Conversational exits for Actions on Google(AoG) are implemented on the AoG app’s side and cannot be overridden by Dialogflow.
Unfortunately, our team is unable to assist as your question is more closely related to Actions on Google
What the Actions on Google Support Team told us about it
We can understand that the word “Fermo” is in reference to a city. However, it is also a system level hotword. Unfortunately there is no solution to bypass the system limitation. Please consider maybe using Fermo city / town or something along those lines, however that may not be guaranteed since it is a system hotword.
Read it again:
there is no solution to bypass the system limitation
So, what did we do? We created a list of potentially “dangerous” cities, which right now contains only Fermo.
When the user provides a single-word response and we are in an Intent where we are asking for a city to extract a value for our @sys.geo-city parameter, we check whether this word is in the list.
If so, we wrap the city in some context - “I live in <word>” - before sending the response to Dialogflow.
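A minimal sketch of that check (hypothetical names, assuming a Node/TypeScript layer sitting in front of Dialogflow):
// Cities whose names collide with conversation-exit hotwords (“Fermo” = “Stop!”).
const DANGEROUS_CITIES = new Set(['fermo']);

// Wrap a bare city name in a sentence before forwarding the text to Dialogflow,
// so it can no longer be interpreted as a hotword on its own.
function wrapCityIfNeeded(userText: string, expectingCity: boolean): string {
  const word = userText.trim();
  const isSingleWord = !word.includes(' ');
  if (expectingCity && isSingleWord && DANGEROUS_CITIES.has(word.toLowerCase())) {
    return `I live in ${word}`;
  }
  return userText;
}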
There is no guarantee that this works, according to AoG support team response, but currently, it seems to work just fine.
The Vision API now supports online (synchronous) small batch annotation (PDF/TIFF/GIF) for all features. The relevant documentation is Small batch file annotation online.
Let’s see how we can do this with PHP.
Context
Assuming PHP >= 7.4, the packages to require are:
google/cloud-vision
google/cloud-storage
Code
How to upload the file in the storage
Soon.
Text detection
Even with PDFs we are going to use ImageAnnotatorClient, the service that performs Google Cloud Vision API detection tasks over client images and returns detected entities from the images.
$path = "gs://mystorage.com/path/to/my/file.pdf";
/* If you have it, you can give an hint about the language in the doc */
$context = new ImageContext();
$context->setLanguageHints(['it']);
/* Here's the annotator described before */
$imageAnnotator = new ImageAnnotatorClient();
/* We create an AnnotateFileRequest instance to annotate one single file */
$file_request = new AnnotateFileRequest();
/* We express our input file in terms of a GcsSource
instance the represents the Google Cloud Storage location */
$gcs_source = (new GcsSource())
->setUri($path);
/* Let's specify the feature we need. You can find the options below */
$feature = (new Feature())
->setType(Type::DOCUMENT_TEXT_DETECTION);
/* Let's specify the file info: a PDF in that location */
$input_config = (new InputConfig())
->setMimeType('application/pdf')
->setGcsSource($gcs_source);
/* Some configurations, including the pages of the file to perform image annotation. */
$file_request = $file_request->setInputConfig($input_config)
->setFeatures([$feature])
->setPages([1]);
/* Annotate the files and get the responses making the synchronous batch request. */
$result = $imageAnnotator->batchAnnotateFiles([$file_request]);
/* We take the first result, because that's 1 page only. */
$res = $result->getResponses();
$offset = $res->offsetGet(0);
$responses = $offset->getResponses();
$res = $responses[0];
/* Finally!!! The annotations! */
$annotations = $res->getFullTextAnnotation();
/* Clean up resources such as threads */
$imageAnnotator->close();
Features
In your request you can set the type of annotation you want to perform on the file. You can check the reference or the features list documentation.
Simple! Last January the wiki passed the milestone of 700 guides, counting pages created and conceived from June 2011 to today 😊️
The honor of crossing that finish line went (quite deservedly) to our “hurricane” wilecoyote, who arrived barely a year ago and has put in an incredible effort, which has proved providential for the group’s continuation.
Table & counts
And we are rounding down, because our table does not count the myriad of small daily edits or the periodic, substantial routine work done at every release. In fact, the page exists to easily keep track of the work carried out; the count is more of an indulgence.
Ah yes... the count. We are in the IT field after all, and you may have wondered which stack we use to perform these statistical calculations. Well, what else could we possibly use but... gnome-calculator! Relentless and infallible at summing natural numbers 😎️
A thank-you and the next milestone...
But back to the wiki and, above all, a thought for the whole user base. From those who spotted and fixed small mistakes to those who racked their brains writing complex guides: a heartfelt thank-you for everyone’s effort!
In June we will celebrate the tenth anniversary of this by-now-not-so-new course, and it will be the occasion to retrace the steps and the evolutions that, for better or worse and despite many difficulties, allowed us to get here and shout: 700! 😀️
In April, support for Ubuntu 16.04 will end.
This means that in the section containing the reports of Ubuntu installations on laptops, many of the links to the guides will move to the Guide da aggiornare (guides to update) column.
Fortunately, computers that run Ubuntu 16.04 also support the later releases, such as Ubuntu 18.04 and 20.04 (in most cases).
Have you kept installing releases later than 16.04 on your laptop? Then updating the related page will be really easy.
How to update the page?
When you open the editor to modify a page, you will notice at the top the Informazioni macro, which looks like this: <<Informazioni(forum="..."; rilasci="15.10 16.04";)>>
Inside it there is the rilasci entry, which contains, between quotes, the releases the guide has been tested with. Well, all you have to do is add the version number of one of the currently supported Ubuntu releases.
So, assuming you have successfully tested the guide with... say, 20.04, just add that number inside the macro, which becomes: <<Informazioni(forum="..."; rilasci="15.10 16.04 20.04";)>>
Nothing to it, right?
In general it's a good thing...
The same applies to the pages already listed in the guides-to-update column and, more generally, to any kind of guide on the wiki. As you can see, if you run into a page tested with an obsolete Ubuntu version, confirming its validity with a supported release is a matter of moments.
Our goal is to have a look at the mobile app development technology for startups in 2021, in order to publish an app for Android and an app for iOS with the perfect trade-off between quality and time+resources. How do we choose the best technology to accomplish this task?
First of all, the most accurate answer is:
it depends.
Every startup has different needs, in terms of business, technology, resources (how many developers? and how many of them are senior?) and deadlines.
Assumptions
There are 200000 articles out there about this subject.
This is why I want to list some boundaries that will “limit” our research. So this is not a generic “Native vs Cross-platform” article, but a study of a specific case.
That said, let’s start with these assumptions:
The start-up’s technological stack has a very strong connection with Google, mostly with Firebase. This is a very important aspect. In start-up X the most important technology could be Y, so adjust the content of the article accordingly;
The start-up was providing all its services through web pages only, and the current app is basically a WebView showing an ad-hoc version of the website. I know, I know! Anyway, there is an ongoing process to transform all the services into a full set of (modularised) REST APIs that we can then consume in our app;
The app does not have processes that are heavy in terms of device resources, does not do any image/video processing and so on. Anyway, it has to have reliable access to the camera and retrieve the user’s accurate position through GPS;
The current technology is React Native, so we have already seen both the pros and cons of a non-native approach. But non-native covers a world of solutions, which we will try to consider in this article;
The strong connection between React and Firebase, to build business web applications in JavaScript with all the power of the Firebase services, is very well explained in a book that I strongly suggest;
Last but not least, I am not a mobile specialist, so your feedback is very welcome.
What the study is about, and how we try to understand whether one choice is better than another
We will briefly describe the pros and cons of every technology. Then we will compare them at a high level and, a bit, in terms of performance.
At this point we will take into account other important factors:
Who’s behind them?
The community around each technology;
Available libs: active development, time to “port” new features, stability, support and community;
Trends: we don’t want to adopt a technology that is going to become marginal in the next few years;
Very few observations related to the UI/UX aspect.
One by one
100% Native
Basically, we have two separate projects here: one for Android, one for iOS.
Thanks to the respective IDEs (Android Studio and Xcode) you will probably be able to build a good skeleton of your app, and a good enough flow between the views, with little or no code required, using a “drag & drop” approach for elements (buttons, text fields, …). A very good preview is also available without using a simulator, for example.
That’s great. Anyway, all the logic - performing API calls, dynamically adding/removing content, handling push notifications, accessing the GPS position, accessing the camera/gallery and so on - is implemented in a platform-specific language: Swift (Objective-C) for iOS, Kotlin (Java) for Android.
All of this just to say that the two apps will live in two separate, parallel projects. You can of course share some high-level logic about how to manage certain flows, but in the end you have to code them separately.
Two separate projects, two separate teams with separate skills. In the worst-case scenario this could double the needed resources.
On the other hand, you have maximum reliability when you are going to handle device resources such as the camera or the GPS.
You have all the debugging and profiling power you need in your IDE.
Even regarding third-party libraries (of course a lot depends on the library itself here), you can have fewer surprises if you are using the specific library for the given platform.
– Firebase
In the case of Firebase, a native Android solution is incredibly good, of course, given that we’re talking about Google solutions.
Firebase also has an official SDK for iOS, so the native solution is incredibly good for iOS, too.
Hybrid-web
We have just one project here: you will code an HTML/JS/CSS solution, that’s it. All the UI is shared between the platforms, so you have a very consistent UI and you don’t need different components for different platforms. You can have them if you want, or you can simply apply a different CSS for each platform.
For debugging purposes you have different layers to take care of, but most of the work related to the UI can be done in a web-based debugging tool like the Chrome Developer Tools. One place, different platforms.
So, we were saying that we have just one project here, and this is something to think carefully about if you are a start-up. Additionally, you can “re-use” web-related skills in your team to build your app, substantially decreasing the resources needed to complete it.
How about the device resources?
For the most common tasks such as taking a picture or accessing the GPS location, you can be sure to find a plugin (official or not) that takes care of these tasks for you.
The main idea is that there is a sort of Javascript API system to communicate with the underlying platform.
Ionic
With Ionic you can be sure to have someone in your team that can build a simple app with a small learning curve, due to the fact that it supports the major Javascript frameworks and libraries such as React, Angular and Vue.
It has 120 native device plugins for Camera, GPS, Bluetooth and so on.
It relies on Capacitor or Cordova to execute your web components within wrappers targeted to each platform, with API bindings to access device’s capabilities.
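As a taste of what those API bindings look like in practice, here is a hedged sketch using the Capacitor camera plugin (package name and exact API may differ between Capacitor versions):
import { Camera, CameraResultType } from '@capacitor/camera';

// Ask the underlying platform (Android/iOS) for a photo through the JS bridge.
async function takePicture(): Promise<string | undefined> {
  const photo = await Camera.getPhoto({
    quality: 90,
    resultType: CameraResultType.Uri, // get a file URI instead of base64 data
  });
  return photo.webPath; // usable directly as an <img> src inside the web view
}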
This is the architecture compared to the native one:
The plugin page shows a “build: failing” badge (as of 3 January 2021): not good.
Let’s look at the plugin’s popularity:
Watchers: 68 · Stars: 993 (source: plugin page on GitHub, 3 January 2021)
Anyway, if your choice is Ionic and you want to continue your journey with React, I strongly suggest this book to give you very useful hints on how to build robust web applications with React and Firebase.
For this part, that’s it! In the next part we will focus on the Hybrid-Native solutions.
When performing your POST request to https://fcm.googleapis.com/fcm/send, please remember to add a Content-Type: application/json header and an Authorization header with value key=<server_key>, which you can retrieve from the “Cloud Messaging” section of your project settings in the Firebase console.
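Putting that together, a request to the legacy FCM HTTP endpoint might look roughly like this (placeholders left as-is, payload fields only as an illustration):
curl -X POST https://fcm.googleapis.com/fcm/send \
  -H "Content-Type: application/json" \
  -H "Authorization: key=<server_key>" \
  -d '{"to": "<device_registration_token>", "notification": {"title": "Hello", "body": "World"}}'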
On 22 October Ubuntu 20.10, codenamed Groovy Gorilla, was released; it will be supported until July 2021.
As usual, the Documentation Group organized the review of a number of wiki pages.
A lot of work still remains: many guides need to be updated and verified with the new 20.10 release. To keep reviewing the documentation we need your help!
How to contribute
Taking part in writing and updating the Ubuntu-it wiki documentation is rather simple.
Many guides contain, below the index, a notice similar to this: «Guida verificata con Ubuntu: 16.04 18.04 20.04» (guide verified with Ubuntu: 16.04 18.04 20.04).
If a guide is also valid for Ubuntu 20.10 but this information is not reported below the page index, it is enough to add the verification for the 20.10 release.
If a guide contains instructions that are not valid with Ubuntu 20.10, you can update it to adapt its contents to the new release.
As always, if you find wrong names, spelling mistakes, broken links or other inaccuracies that are easy to fix, don't wait: correct them without fear!
// Assumes use Google\Cloud\Dialogflow\V2\SessionsClient; at the top of the file,
// and a $queryInput (QueryInput) built in the earlier, omitted part of the snippet.
$session = $this->_getSessionName($sessionId);
$sessionsClient = new SessionsClient();
$response = $sessionsClient->detectIntent($session, $queryInput);
// WHERE _getSessionName() is:
private function _getSessionName($sessionId)
{
$projectId = $this->_getProjectId();
$environment = $this->_getEnvironment();
return "projects/{$projectId}/agent/environments/{$environment}/users/-/sessions/{$sessionId}";
}
And that’s it.
In the Dialogflow History tab there is still no way to filter conversations by Environment, but there is an Environment indicator in the conversation Detail.
You can get 2 free months of Skillshare Premium, thanks to this referral link.
With Skillshare Premium you will:
Get customized class recommendations based on your interests.
Build a portfolio of projects that showcases your skills.
Watch bite-sized classes on your own schedule, anytime, anywhere on desktop or mobile app.
Ask questions, exchange feedback, and learn alongside other students.
That’s it!
Get inspired.
Learn new skills.
Make discoveries.
Be curious.
I will be more than happy if you join my classes and give me some feedback. The classes are about almost everything that I do in my life (you can check my skills again on the homepage)!
You woke up one day and your Android phone contacts disappeared.
You try to perform a Google Search but you only find strange solutions, mostly related to some Samsung stuff, factory resets, uninstalling some app, etc…
Suddenly you realise that your Google Contacts is empty!
From the TV, Bobo Vieri smiles slyly just like when he played football, and repeats "Shave like a bomber!". All the men who remember him from when he played and got people talking on and off the pitch answer that smile with approval.
What Vieri does (well) is being a testimonial: he puts his handsome face and his innate likeability to work to sell a product. A profession that many of his former fellow footballers, and people from show business, also devote themselves to.
So it makes me smile when I read posts like this one that, talking about Chiara Ferragni, write about an "absolute void". It seems self-evident to me that Chiara Ferragni is a testimonial of herself, someone who knows how to use the new channels to communicate her message (whatever it is). She does marketing, and she does it well, judging by the amount of money she turns over. In her case the phenomenon is so widespread that whoever writes or talks about her, even negatively, gets publicity in turn, exploiting her fame by reflection.
In these articles I detect, beyond the malice used to attract clicks, a certain inability to read reality and to adapt to the change in customs and professions that is under way.
I also find it surprising that the commentators above cannot understand how the testimonial phenomenon has partly come down from the Olympus of TV celebrities to branch out into the thousand streams of middle-class representatives. Nowadays testimonials are also complete unknowns, people we may have gone jogging with at the park last Sunday.
🇬🇧🇺🇸 Last october I attended Ubucon Europe 2019, at Sintra in Portugal. Still need some time to gather the right words to explain how well I felt standing there with Ubuntu mates, and how many different good talks I saw. Meanwhile, you can take a look at the video of my talk, where I speak about my running story: how I began running, why I still run, and why (almost) everybody can do it. Oh! Obviously I explain also what open source software you can use to track safely and securely your runs. Let me know what you think about it! :-)
🇮🇹 Lo scorso ottobre ho partecipato a Ubucon Europe 2019, a Sintra in Portogallo. Ho bisogno di ancora un po' di tempo per mettere insieme le parole giuste per spiegare come mi sono sentito stare lì con gli amici ubunteri, e quanti bei talk ho sentito. Intanto, potete dare un'occhiata al video del mio talk, dove parlo della mia storia di corsa: come ho cominciato, perché corro ancora e perché (quasi) tutti possono farlo. Oh! Ovviamente spiego anche che software open source si può usare per tracciare in le proprie corse in modo sicuro. Fatemi sapere cosa ne pensate! :-)
Thanks very much Ubuntu & Canonical for funding my travel expenses!
On the occasion of her visit to Italy, there has been plenty of negative criticism of Greta Thunberg, the Swedish girl who left school for her personal battle in favour of the environment, and of her action against climate change. There have also been attacks on her as a person, as unfounded as they are despicable, especially considering her young age.
What escapes those who are used to reading only newspaper headlines, and forming an opinion based on them, is that oil companies spend millions of dollars to spread fake news, finance politicians and polish their image.
That is: while I am there separating the little plastic window from the rest of the paper envelope before throwing both into their respective recycling bins, "the big oil companies, although officially supporters of the fight against climate change, actually operate in the shadows in order to preserve their business". Not to mention that "the big international banks have poured 1,900 billion dollars into the fossil fuel sector (gas and coal included)".
If after all this you remain indifferent, if you could not care less that planet Earth is becoming ever more unlivable, if you think that our children and grandchildren who will have to manage it will somehow get by, you should think about what you hold dearest (money!) and consider the possibility that continuing to finance fossil energy sources - 100 billion dollars a year, 17 billion of which from the Italian Government - is a terrible idea for your own wallet.
Bored beyond measure by the invasive Twitter advertising of the well-known maker of electronic gadgetry, which intends to restore its virginity on privacy by means of emotionally cutesy videos, I replied bluntly to one of its tweets. I was immediately blocked.
The moral of the story: I will no longer be able to see Apple's next hypocritical ads. Thank you, Apple!
The Ubuntu-it wiki has been updated to optimize reading texts from smartphones.
The changes have already been active for a couple of weeks, but some adjustments were needed to make them truly effective. You may need to refresh or clear your browser cache to see the new layout; apart from that, you will be able to read the documentation, the newsletter and all the other wiki pages from a smartphone without further problems.
At the moment the change only kicks in when the window width is smartphone-sized, in so-called "portrait" mode. In that case the new simplified header appears, with the classic button for the collapsible menu. You will notice that in this mode the login/edit/etc. links are not present.
Rotating the smartphone ("landscape" mode), the site goes back to being displayed as on desktop.
This setup was chosen because the wiki was originally designed with fixed-width layouts and sometimes complex tables that mainly suit desktop viewing. It is therefore possible to run into pages whose content adapts poorly to the width of small screens, so by rotating the device you still have the option of viewing the original layout.
Except for a few pages that will have to be retouched by hand and some inevitable small adjustments to the wiki settings, the results went beyond expectations. So it is not excluded that, further down the road, the responsive design may be extended to the desktop view as well.
Technical details
The wiki runs on the MoinMoin platform. Its look and feel dates back to the introduction of the Light theme in the now distant 2011.
For those interested, the changes touched just three files:
light.py: inside it, next to the old header, the HTML and JavaScript code was added to draw the new mobile-style header.
common.css - screen.css: these are the two main files responsible for the wiki's styling. Media queries were added inside them to adapt the page elements and to alternately show the new or the old header depending on the screen width (see the sketch below).
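For the curious, the kind of rule involved looks more or less like this (the selectors are purely illustrative, not the actual ones used by the theme):
/* When the viewport is smartphone-sized, hide the classic header
   and show the simplified mobile one. */
@media screen and (max-width: 480px) {
    #classic-header { display: none; }
    #mobile-header { display: block; }
}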
All in all, MoinMoin pleasantly surprised us by showing a good degree of adaptability.
Body-parser does an amazing job for us: it parses incoming request bodies, and we can then easily use the req.body property according to the result of this operation.
But, there are some cases in which we need to access the raw body in order to perform some specific operations.
Verifying Shopify webhooks is one of these cases.
Shopify: verifying webhooks
Shopify states that:
Webhooks created through the API by a Shopify App are verified by calculating a digital signature. Each webhook request includes a base64-encoded X-Shopify-Hmac-SHA256 header, which is generated using the app’s shared secret along with the data sent in the request.
We will not go into all the details of the process: for example, Shopify uses two different secrets to generate the digital signature, one for the webhooks created in the Admin –> Notifications interface, and one for those created through the API. Anyway, only the key differs; the process is the same. So, I will assume that we have created all our webhooks in the same way, using JSON as the payload format.
A function to verify a webhook
We will now code a simple function that we will use to verify the webhook.
The first function’s param is the HMAC from the request headers, the second one is the raw body of the request.
const crypto = require('crypto'); // built-in Node module

function verify_webhook(hmac, rawBody) {
// Retrieving the key
const key = process.env.SHOPIFY_WEBHOOK_VERIFICATION_KEY;
/* Compare the computed HMAC digest based on the shared secret
* and the request contents
*/
const hash = crypto
.createHmac('sha256', key)
.update(rawBody, 'utf8')
.digest('base64');
return(hmac === hash);
}
We use the HMAC retrieved from the request headers (X-Shopify-Hmac-Sha256), we retrieve the key stored in the .env file and loaded with the dotenv module, we compute the HMAC digest according to the algorithm specified in Shopify documentation, and we compare them.
We use the crypto module in order to use some specific functions that we need to compute our HMAC digest.
Crypto is a module that:
provides cryptographic functionality that includes a set of wrappers for OpenSSL’s hash, HMAC, cipher, decipher, sign, and verify functions.
NOTE: it’s now a built-in Node module.
Retrieving the raw body
You probably have a line like this one, with or without customization, in your code:
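Something like this (a typical Express + body-parser setup; names assumed):
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
// Parse incoming JSON bodies into req.body
app.use(bodyParser.json());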
So your req.body property contains a parsed object representing the request body.
We need to find a way to “look inside the middleware” and somehow add the information about the request status, using our just-defined function: is it verified or not?
Now, we know that the json() function of body-parser accepts an optional options object, and we are very interested in one of the possible options: verify.
The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request. The parsing can be aborted by throwing an error.
Basically, we check if the buffer is empty: in such case, we consider the message not verified.
Otherwise, we retrieve the raw body using the toString() method of the buf object using the passed encoding (default to UTF-8).
We retrieve the X-Shopify-Hmac-Sha256.
After we have computed the verify_webhook() function, we store its value in a custom property of the req object.
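Putting those steps together, a sketch of that verify callback might look like this (it relies on the verify_webhook() helper above and on the custom property name used below):
// body-parser calls this with the raw, unparsed body buffer.
function verify_webhook_request(req, res, buf, encoding) {
  if (!buf || buf.length === 0) {
    // Empty body: consider the request not verified.
    req.custom_shopify_verified = false;
    return;
  }
  const rawBody = buf.toString(encoding || 'utf8');
  const hmac = req.get('X-Shopify-Hmac-Sha256');
  req.custom_shopify_verified = verify_webhook(hmac, rawBody);
}

app.use(bodyParser.json({ verify: verify_webhook_request }));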
Now, in our webhook code we can check req.custom_shopify_verified in order to be sure that the request is verified. If this is the case, we can go on with our code using the req.body object as usual!
Another idea could be to stop the parsing process: we could do this by throwing an error inside the verify_webhook_request function.
Conclusion
Please leave a comment if you want to share other ways to accomplish the same task, or simply your opinion.
Ubuntu 14.04 LTS Trusty Tahr will reach the end of its life cycle on 30 April 2019 and, from then on, Ubuntu 14.04 LTS - ESM, that is Extended Security Maintenance, will be available. It is a feature included in Ubuntu Advantage, Canonical's commercial support package, and it can also be purchased on a stand-alone basis. Extended Security Maintenance was created to help simplify the process of migrating to new, updated platforms while maintaining compliance and security standards.
The introduction of Extended Security Maintenance for Ubuntu 12.04 LTS was an important step for Ubuntu, bringing critical and important security patches beyond Ubuntu 12.04's end-of-life date. ESM is used by organizations to address security and compliance issues while they manage the process of upgrading to a newer, fully supported version of Ubuntu. The availability of ESM for Ubuntu 14.04 means that the arrival of End of Life status for Ubuntu 14.04 LTS Trusty Tahr in April 2019 should not negatively affect the security and compliance of organizations that still use it as their operating system. In total, ESM provided over 120 updates, including fixes for more than 60 high and critical priority vulnerabilities, to Ubuntu 12.04 users.
Once again, as we have already pointed out in the past in the newsletter (Ubuntu in prima linea per la sicurezza), it is extremely clear that Canonical puts security at the centre of Ubuntu, as well as of the practices and architecture of the related products, on both the business and the consumer side.
In April, support for the glorious Ubuntu 14.04 will end.
This means that in the section containing the reports of Ubuntu installations on laptops, many of the links to the guides will move to the infamous Guide da aggiornare (guides to update) column.
Fortunately, computers that run Ubuntu 14.04 also support the later releases, such as Ubuntu 16.04 and 18.04 (in most cases).
Have you kept installing releases later than 14.04 on your laptop? Then updating the related page will be really easy.
How to update the page?
When you open the editor to modify a page, you will notice at the top the Informazioni macro, which looks like this: <<Informazioni(forum="..."; rilasci="13.10 14.04";)>>
Inside it there is the rilasci entry, which contains, between quotes, the releases the guide has been tested with. Well, all you have to do is add the version number of one of the currently supported Ubuntu releases.
So, assuming you have successfully tested the guide with... say, 18.04, just add that number inside the macro, which becomes: <<Informazioni(forum="..."; rilasci="13.10 14.04 18.04";)>>
Nothing to it, right?
In general it's a good thing...
For obvious reasons we have put the laptop pages tested with 14.04 in the spotlight. The same applies, however, to the pages already listed in the guides-to-update column and, more generally, to any kind of guide on the wiki.
As you can see, if you run into a page tested with an obsolete Ubuntu version, confirming its validity with a supported release is a matter of moments.
The beginning of 2019 brings an important piece of news about the wiki documentation.
The Documentation Group has decided to leave more initiative to wiki users when it comes to creating new guides and updating existing pages. What does this mean? Let's take a step back...
The Documentation Group has always taken care of assisting and supervising the work done by users: being present on the forum to assess whether a proposed guide was necessary or relevant to the purposes of the documentation... or, once a user had done the work, promptly reviewing and adapting the contents to the Standards, adding the link to the guide in the relevant portal, and so on.
All of this to guarantee a rational and well-ordered growth of the documentation.
The weak side of this way of working is that a continuous presence of the staff cannot always be guaranteed, although over the last seven years we have largely managed it.
Our support and supervision activity will go on, but it has been decided to update the workflow so that, even in our absence, a user can act on their own initiative. This is precisely to avoid a guide staying blocked for a long time before being published.
In essence, we have tried to keep the good practices adopted over these years and to add a pinch of extra responsibility on the users' side.
These are the pages describing the main changes to the workflow:
GruppoDocumentazione/Partecipa: shows the steps to follow, with some new additions such as the possibility of updating the indexes of the thematic portals and updating the table of completed guides.
GuidaWiki/Standard: the wiki-text format section has been streamlined, and standard phrases and some stylistic guidelines have been introduced that we ask users to follow as much as possible.
Epson Multi: update of the guide for installing Epson single-function or multifunction printers on Ubuntu.
Scanner Epson: update of the guide for installing Epson scanners on Ubuntu.
Installation Portal
Installare Ubuntu: revision of the entire guide, with new content added.
Internet and Network Portal
Accelerazione Hardware: new guide illustrating the procedure to enable hardware acceleration on Chromium and the browsers derived from it, such as Google Chrome, Opera and Vivaldi.
Opera: update of the guide for the new versions of the browser.
Thunderbird: update of the guide for the famous free e-mail client, which in its latest versions uses a new rendering engine.
Programming Portal
CMake Gui: update of the guide on this tool, useful for controlling the software build process.
Pip: new guide to installing and getting started on Ubuntu with pip, the package manager for Python.
A while ago I wrote this article, describing the steps to install the scanner in question.
I recently upgraded from Xenial Xerus to Bionic Beaver and, following the instructions in the article above, the scanner was still not being detected. Searching the Italian wiki (here is the guide), I discovered that some additional steps are needed, so, to recap:
download the drivers from this address;
install the three packages, iscan-data_[current version]_all.deb, iscan_[current version]~usb0.1.ltdl7_i386.deb and iscan-plugin-gt-s600_[current version]_i386.deb, or run the install.sh script inside the folder (it must first be made executable);
install the libltdl3, libsane-extras and sane packages;
edit the /etc/sane.d/dll.conf file and comment out the epson line (#epson);
edit the /etc/udev/rules.d/45-libsane.rules file and add the following lines:
type the following command, depending on your PC's architecture:
For 32-bit architecture: sudo ln -sfr /usr/lib/sane/libsane-epkowa* /usr/lib/i386-linux-gnu/sane
For 64-bit architecture: sudo ln -sfr /usr/lib/sane/libsane-epkowa* /usr/lib/x86_64-linux-gnu/sane
finally, type the following command to restart udev:
The operation takes some time (proportional to the data size that has to be imported). You can safely close your terminal, the operation will continue.
Checking operations status
Import and export can take time (import takes time even with a very small database).
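The status check itself is typically done with the gcloud CLI (the exact command was not included in this excerpt); something along these lines:
gcloud firestore operations list
gcloud firestore operations describe <operation_name>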
It’s easy from the web console to delete a document, and it’s easy to do it programmatically.
It’s easy on the CLI, too.
But the nightmare of the web console (and of the programmatic approach, too) is that there is no simple and fast way to recursively delete a collection.
Now, assuming you are testing something and your Firestore is full of garbage, you might want to start from scratch by deleting every collection.
This is possible running:
firebase firestore:delete --all-collections
If you are looking at your database in the firebase console, please remember to refresh if the UI is not updated (that means that you still see the root collections).
Conclusion
This concludes the article for now.
I hope that the Firestore side of the CLI will grow more features and commands, because right now it is very limited.
One of the most useful features I can think of, generally speaking, would be the chance to “tail” logs in the shell.
I would be more than happy if someone could chime in with useful tools and additional tips.
The problem: setting max_execution_time / using set_time_limit() I don’t get the wanted results
If you are here, you are probably looking for an answer to the question:
why is set_time_limit() not working properly, and why is the script killed earlier?
or
why is ini_set(‘max_execution_time’, X) not working properly, and why is the script killed earlier?
There is no single answer, because there are many variables to consider.
The concepts: PHP itself vs FPM
Anyway, I will list basic concepts that will help to find the best path to your answer.
The first, brutal, note is: directives you specify in php-fpm.conf are mostly not changeable from ini_set().
Please also note that set_time_limit() is almost nothing more than a convenience wrapper around ini_set() in the background, so the same concept applies.
For simplicity, in this post I will talk about set_time_limit(), but the same applies to ini_set(whatever_related_to_time).
According to what we already said, it’s important to understand that set_time_limit() can only “overwrite” the max_execution_time directive, because
they only affect the execution time of the script itself
On the other hand, directives like request_terminate_timeout are related to FPM, so we are talking about the process management level here:
should be used when the ‘max_execution_time’ ini option does not stop script execution for some reason.
Web server dependent directives
What we have seen so far aren’t the only variables in the game.
You should check your web server configuration, too, to be sure nothing is playing a role in your timeout problem.
A very common example is the fastcgi_read_timeout, defined as:
a timeout for reading a response from the FastCGI server. The timeout is set only between two successive read operations, not for the transmission of the whole response. If the FastCGI server does not transmit anything within this time, the connection is closed.
So…what to do?
The basic answer is: have perfectly clear what was defined by web server (ex.: fastcgi_read_timeout), what defined by FPM (ex.: request_terminate_timeout) and what was defined by PHP itself (ex.: max_execution_time).
For example, if request_terminate_timeout is set to 1 second but max_execution_time is set to 2 seconds, the script will end after 1 second (whichever limit comes first wins, even if you have set max_execution_time with ini_set() in your script).
A possible solution is simply relying on max_execution_time instead of request_terminate_timeout for per-script limits (while still using the latter to define a solid maximum upper bound).
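To make the three layers concrete, here is an illustrative (not prescriptive) example of where each directive lives; paths and values are assumptions:
# nginx virtual host (web server layer)
fastcgi_read_timeout 300;

# PHP-FPM pool configuration, e.g. /etc/php/7.4/fpm/pool.d/www.conf (FPM layer)
request_terminate_timeout = 300

# php.ini, or ini_set()/set_time_limit() at runtime (PHP layer)
max_execution_time = 120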
The max_execution_time description in the official documentation has a hint about this common problem, which is:
Your web server can have other timeout configurations that may also interrupt PHP execution. Apache has a Timeout directive and IIS has a CGI timeout function. Both default to 300 seconds. See your web server documentation for specific details.
Here are the new additions to the documentation of the Italian Ubuntu community.
Updates for the Ubuntu 18.10 release!
This month saw the arrival of the new Ubuntu 18.10 release. It is an "interim" release with 9 months of support, and it will therefore be supported until July 2019. As usual, the Documentation Group set to work updating a substantial list of pages, including those about downloading, installing and upgrading the system, the repositories... and many more.
For more details see the GruppoDocumentazione/Cosmic page.
Graphical Environment Portal
Dolphin Menu Stampa: getting print-related entries in the context menu of the Dolphin file manager in KDE5, available via right click.
System Administration Portal
Apt: instructions on using APT, the .deb package management system, default in Ubuntu.
Hp15-bs148nl: report of an Ubuntu installation on this laptop.
Installation Portal
Creazione LiveUsb: the table of useful programs for creating bootable USB sticks with Ubuntu has been updated.
Internet and Network Portal
Firefox ESR: installation and use of the Firefox ESR web browser, the official extended-support version of Firefox.
Irssi: useful instructions for installing and using Irssi, an IRC client for communicating in text mode from the command line.
MlDonkey: installation and use of this extremely powerful peer-to-peer program with client and server functionality.
Pidgin: installation and configuration of this multi-protocol instant messaging client.
Telegram: installing the desktop version of Telegram on Ubuntu.
Next Friday, 31 August, I will take part in ESC 2018, held at Forte Bazzera, near the Venice airport.
ESC - End Summer Camp - runs from 29 August to 2 September and is a particularly interesting event, full of sessions on topics such as open source, hacking, open data and... well, the programme is so vast that it is impossible to summarize in a few lines, take a look yourselves!
Yours truly will be at ESC to talk about Ubuntu Touch, which is the activity that lately takes up most of my scarce free time. After Canonical abandoned the development of Ubuntu Touch last year, an ever-growing community gathered around the UBports project, initially carried forward by the commendable Marius Gripsgard alone, which has continued the work from where Canonical stopped. Just a few days ago UBports officially released OTA-4, the first update based on Ubuntu 16.04 and, above all, a very long series of bug fixes and improvements after a long period of consolidation and clean-up.
Now it is time for the whole world to know that Ubuntu Touch is alive, and probably the best thing happening in the entire worldwide open source landscape. Once the scandal of the Snowden revelations passed, people seem to have forgotten (resigned themselves?) how invasive the systematic violation of personal privacy carried out by every smartphone on the market is. A practice that exposes one's life not only to the well-known American tech companies of the sector (Apple, Google, Facebook, Yahoo), but also to all the pirates who every day steal tons of personal data from those very companies.
Ubuntu Touch is the open source answer to the demand for more control and protection of one's data, as well as a truly free and open operating system.
If you come to Forte Bazzera on Friday 31 August at 11.30, I will talk to you about this (and much more!). See you there!
Here are the new additions to the Italian community documentation between April and June 2018.
Updates for the Ubuntu 18.04 LTS release!
April saw the arrival of the new Ubuntu 18.04 release. It is an LTS release and will therefore be supported for the next 5 years. As usual, the Documentation Group set to work updating a substantial list of pages, including those about downloading, installing and upgrading the system, the repositories... and many more.
The new Voting Machine Scuola Lombardia Portal has been published!
2018 saw the arrival of an unexpected event in the Linux world. A myriad of "voting machines" (the devices used in a recent referendum in Lombardy) have been repurposed for use in Lombardy's schools. The chosen system is Ubuntu GNOME 16.04 LTS.
A portal with useful information on the subject has therefore been created. We thank Matteo Ruffoni for bringing the matter to our attention last March during Merge-it in Turin.
VotingMachineScuolaLombardia: a new portal dedicated to the voting machine, the electronic voting device supplied to Lombardy's schools with Ubuntu preinstalled!
/Utilizzo: characteristics and functionality of the Ubuntu GNOME operating system installed on the voting machine.
/ProgrammiPreinstallati: the most commonly used applications preinstalled in Ubuntu GNOME 16.04 LTS.
/Gcompris: using the GCompris application, a suite of educational games for children from 2 to 10 years old.
/AccountUtente: the steps to take to create new users other than the administrator user.
We also point out the Lavagna libera discussion group, where many teachers deal first-hand with the introduction of free software in schools. Among the topics discussed you can of course find many direct accounts of the use of the voting machines.
Graphical Environment and System Administration Portals
Script Stampa: getting print-related entries in the context menus of the Nautilus, Caja and Nemo file managers, available via right click.
Kodi: open source, cross-platform program for managing a complete media center or home theater PC.
Rosegarden: audio/MIDI sequencer and music score editor.
Server and Office Portals
Xampp: useful cross-platform program that allows, in a few steps, a simplified installation of the Apache web server, MySQL, PHP, Perl and other tools.
On 26 April, Ubuntu 18.04 LTS, codenamed Bionic Beaver, was finally released.
As you may already know, it is a much-awaited and important release: it is the first Long Term Support with the GNOME desktop preinstalled.
It will be supported until 2023. To discover all its new features you can consult this page.
As usual, the Documentation Group organized the work of updating and reviewing the main wiki pages.
But a lot of work still remains to be done. Many guides need to be verified or updated with the new 18.04 release.
How to contribute
Most guides contain, below the index in the top right, a notice similar to this: «Guida verificata con Ubuntu: 14.04 16.04 17.10» (guide verified with Ubuntu: 14.04 16.04 17.10).
Have you noticed that an existing guide is also valid for Ubuntu 18.04?
Perfect! Let us know through the dedicated discussion in the forum (you will always find the link below the index), or contact us. If you are already registered on the wiki, you can verify it yourself.
Have you noticed that an existing guide contains instructions that are not valid with Ubuntu 18.04?
Good, you can let us know, again by contacting us. Moreover, if you are registered on the wiki you can fix it yourself.
Not registered on the wiki yet? What are you waiting for?! Start contributing right away! 🙂
"Il bello della sconfitta sta innanzitutto nel saperla accettare. Non sempre è la conseguenza di un demerito. A volte sono stati più bravi gli altri. Più sei disposto a riconoscerlo, quando è vero, quando non stai cercando di costruirti un alibi, più aumentano le possibilità di superarla. Anche di ribaltarla. La sconfitta va vissuta come una pedana di lancio: è così nella vita di tutti i giorni, così deve essere nello sport. Sbaglia chi la interpreta come uno stop nella corsa verso il traguardo: bisogna sforzarsi di trasformarla in un riaccumulo di energie, prima psichiche, nervose, e poi fisiche." (Enzo Bearzot)
Quando ero giovane, molto giovane, giocavo a calcio. Tutti i bambini giocavano a calcio. Era lo sport preferito, anzi, era il passatempo preferito dei bambini. In estate si stava praticamente tutto il giorno sul campetto vicino a casa, a tirare calci al pallone. Quando non eravamo al campetto, eravamo sul cortile di casa, sempre a tirare calci al pallone. Io ero universalmente considerato scarso - forse il più scarso. Giocavo in difesa, ma spesso finivo a giocare in porta, dove nessuno voleva mai stare.
Fatalmente, ogni tanto si rompeva qualche vetro: è incredibile quanto facilmente si possa rompere un vetro, pur tirando pianissimo il pallone. Mamma si "vendicava" a modo suo: il giardino limitrofo al cortile era cosparso di rose con spine così appuntite da bucare anche il miglior pallone di cuoio. Si può dire che la mia infanzia sia trascorsa così, tra vetri rotti, palloni bucati e jeans rovinati dalle scivolate sull'erba. Altri tempi.
Non lo so come sia finito il calcio al suo attuale livello, qualche anno fa ne descrissi alcune disgrazie, alcune sono ancora attuali, altre sono addirittura peggiorate. Ricordo anche un tentativo di Roberto Baggio di cambiare direzione, finito nel nulla. Adesso però basta.
Nel mio immaginario romantico, i principali sentimenti che accompagnano lo sport sono il divertimento, lo spirito olimpico di partecipazione, l'agonismo positivo che insegna a migliorare e superare i propri limiti. Potrò anche sbagliarmi, ma vedo poco di tutto questo nel calcio italiano.
Gioire delle sconfitte altrui, augurare il peggio all'avversario, vedere solo le colpe altrui, immaginare complotti a favore di questa o quella squadra, rende persone tristi e astiose. Rende le persone peggiori, e (esagero) il anche il mondo un po' peggio di quello che era prima.
Preferisco spendere le mie poche energie per costruire un mondo - quel poco che mi circonda - un po' migliore di quello che ho trovato.
(Nella foto: Bearzot gioca a scopone al ritorno dai vittoriosi mondiali di Spagna 1982, in coppia con Causio e contro Zoff e il presidente Pertini).
Next week I will take part in UbuCon Europe 2018, in Xixón (Gijón), in Asturias, Spain.
UbuCon Europe
UbuCon Europe is the European gathering of Ubuntu developers and users (and not only). It is the event that replaced the Ubuntu Developer Summits organized by Canonical. UbuCon is a unique opportunity to meet in person the many people who, in one way or another, take part in the development and support of Ubuntu. Remember that Ubuntu (although founded by the meritorious South African dictator Mark Shuttleworth) is a project born in Europe (Great Britain is still part of it, at least geographically ;-) ), and in Europe the communities of Ubuntu volunteers are particularly active.
For me it will be a chance to see again some friends who took part in the Ubuntu Phone Insiders adventure, and to meet in person many of the people I have only ever seen or read on the Internet. It is always a pleasant experience to talk face to face with those you are used to reading in emails or on blogs, because human contact and direct experience are worth more than a million words.
The event takes place in Xixón (Gijón in Spanish), which is the home town of Marcos Costales, contact of the Spanish Ubuntu Community, to whom we owe the organization of this edition of UbuCon Europe. Marcos, whom I have known since the Insiders days, is one of the most active people in the Ubuntu world, and also the developer of one of the most beautiful programs for Ubuntu Phone: uNav. I really owe my participation to Marcos: he is the one who pushed me to leave my laziness on the couch and book the flight to Oviedo.
The event schedule is very rich: at its best (or worst, since yours truly has not been granted the gift of ubiquity) there are as many as four different talks at the same time. It will be a problem for me to decide which one to follow, there are so many things to know and learn! In the end I think I will have to focus on the topics I already (barely) follow within the community, avoiding getting tangled up in yet more new projects.
My talk
My talk will be on Saturday morning, and I will speak about "Social Media for Open Source Communities". I spoke about the same topic (slides on Slideshare) at the recent MERGE-it 2018 as well, but this talk will be broader, because I will also cover social media strategies. Above all, it will be my first talk in English: Saint Aldo Biscardi, patron of Italians who speak bad English, help me!
It is quite a challenge for me; I hope I manage not to say too much nonsense. After all, the social media topic is definitely underrated in many communities, even though the direct contact that social channels allow is extremely important both for promotion and for supporting your own "customers".
Report and live tweeting
The talks and workshops run from Friday 26 to Sunday 28 April; I will try to attend as many as possible... but no more than one at a time! Unfortunately no web streaming is planned, nor video recordings of the talks (although I proposed to Marcos a solution that might allow covering part of the talks).
I will give you a report of the event on this blog, but above all I will also follow the event with live tweeting on Twitter, so follow me there to stay updated in real time! :-) It is my first time at an international event dedicated to the FLOSS world, and I really can't wait!