01 November 2022

Issue N° 035/2022 of the ubuntu-it community newsletter is available. In this issue:

  • Ubuntu 23.04 codename revealed
  • How Canonical hires
  • Ubuntu 22.10 gets a first security update
  • Coming soon to Ubuntu Desktop
  • Full Circle Magazine Issue #186 in English
  • Linux kernel 5.19 has reached End of Life
  • LibrePlanet 2023: deadlines by 2 November
  • Security updates
  • Reported bugs
  • Write for the newsletter

You can read the newsletter on this page, or download it in PDF format. If you missed the previous issues, you can find them in the archive! To receive the newsletter in your inbox every week, subscribe to the newsletter-italiana mailing list.

01 November 2022, 17.12

27 October 2022

To create a Kubernetes deployment, we must specify the matchLabels field, even though its value must match the labels we already specify in the template. But why? Can't Kubernetes be smart enough to figure it out without us being explicit?

A deep dive in K8s deployment matchLabels field

Illustration by unDraw+.

Did you know? K8s is short for Kubernetes ‘cause there are 8 letters between K and S.

A Kubernetes (K8s) Deployment provides a way to define how many replicas of a Pod K8s should aim to keep alive. I’m especially bothered by the Deployment spec’s requirement that we specify a label selector for pods, and that the selector must match the same labels we have defined in the template. Why can’t we just define them once? Why can’t K8s infer them on its own? As I will explain, there is actually a good reason; to understand it, however, we have to go down a rabbit hole.

A deployment specification

Firstly, let’s take a look at a simple deployment specification for K8s:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx # Why can't K8s figure it out on its own?
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

This is a basic deployment, taken from the official documentation, and we can already see that we need to fill in the matchLabels field.

What happens if we drop the “selector” field completely?

➜ kubectl apply -f nginx-deployment.yaml
The Deployment "nginx-deployment" is invalid:
* spec.selector: Required value

Okay, so we need to specify a selector. Can it be different from the “labels” field in the “template”? Let’s try with:

matchLabels:
  app: nginx-different
➜ kubectl apply -f nginx-deployment.yaml
The Deployment "nginx-deployment" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"nginx"}: `selector` does not match template `labels`

There are usually good reasons behind what seems like a poorly thought-out implementation, and as we’ll see, that is true here as well.

As expected, K8s doesn’t like it: the selector must match the template. So, we must fill in a field with a well-defined value. It really seems like something a computer could do for us, so why do we have to specify it manually? It drives me crazy having to do something a computer could do without any problem. Or could it?

Behind a deployment

Probably, you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.

How does a deployment work? Behind the curtains, when you create a new deployment, K8s creates two different objects: a Pod definition, using the “template” field of the Deployment as its specification, and a ReplicaSet. You can easily verify this by using kubectl to retrieve pods and replica sets after you have created a deployment.

A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

A ReplicaSet needs a selector that specifies how to identify the Pods it can acquire and manage. However, this doesn’t explain why we must specify it ourselves, and why K8s cannot fill it in on its own. After all, a Deployment is a high-level construct that should hide ReplicaSet quirks: such details shouldn’t concern us, and the Deployment should take care of them on its own.
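For reference, here is roughly what the generated ReplicaSet looks like (a simplified sketch based on the ReplicaSet documentation, with a hypothetical name; the real object also carries a generated pod-template-hash label): note that the selector is a required part of its spec too.

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-deployment-5fbdf85c67   # hypothetical generated name
spec:
  replicas: 3
  selector:           # required: tells the ReplicaSet which Pods it owns
    matchLabels:
      app: nginx
  template:           # copied from the Deployment's template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
```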

Digging deeper

Understanding how a Deployment works doesn’t help us find the reason for this particular behavior. Given that Googling doesn’t seem to bring up any interesting results on this particular topic, it’s time to go to the source (literally): luckily, K8s is open source, so we can check its history on GitHub.

Going back in time, we find out that K8s actually used to infer the matchLabels field! The behavior was removed with apps/v1beta2 (released with Kubernetes 1.8), through pull request #50164. That pull request links to issue #50339, which, however, has a very brief description and lacks the reasoning behind the choice.

The linked issues are rich in technical details, and they have many comments. If you want to understand exactly how kubectl apply works, take a look!

Luckily, other issues provide way more context, such as #26202: it turns out the main problem with defaulting appears when labels are mutated in subsequent updates to the resource: the patch operation is somewhat fickle, and apply breaks when you update a label that was used as a default.
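A hypothetical sketch of the failure mode (the names are illustrative, not taken from the issue): if the selector were defaulted from the template labels, renaming a label would silently change the selector as well, and a patch-based apply would have no well-defined way to reconcile the old and new state:

```yaml
# v1: no explicit selector; it would be defaulted to {app: nginx}
template:
  metadata:
    labels:
      app: nginx
---
# v2: the label is renamed, so the defaulted selector would become
# {app: web}, no longer matching the Pods created under v1; the
# patch operation breaks on this implicit change.
template:
  metadata:
    labels:
      app: web
```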

Many other concerns are described in depth by Brian Grant in issue #15894.

Basically, assuming a default value creates many questions and concerns: what’s the difference between explicitly setting a label to null and leaving it empty? How do we manage all the cases where users relied on the default and now want to update the resource to manage the label themselves (or the other way around)?

Conclusion

Given that in K8s everything is meant to be declarative, the developers decided that explicit is better than implicit, especially for corner cases: specifying things explicitly allows more robust validation at creation and update time, and removes some possible bugs that existed due to the uncertainty caused by this lack of clarity.

Shortly after dropping the defaulting behavior, the developers also made the labels immutable, to make sure behaviors were well-defined. Maybe in the future labels will become mutable again, but for that to work, somebody needs to write a well-thought-out design document explaining how to manage all the possible edge cases that can happen when a controller is updated.

I hope you found this deep dive into the question interesting. I spent some time on it, since I was very curious, and I hope that the next person with the same question can find this article and get an answer faster than I did.

If you have any questions or feedback, please leave a comment below, or write me an email at hello@rpadovani.com.

Ciao,
R.

27 October 2022, 00.00

25 October 2022

Issue N° 034/2022 of the ubuntu-it community newsletter is available. In this issue:

  • All the news about Ubuntu 22.10 Kinetic Kudu
  • Canonical releases an important security update
  • Ads in the terminal pushing users to Ubuntu Pro: users are annoyed
  • More security updates arrive for Ubuntu systems
  • Linux kernel 6.1 is ready for its first public tests
  • The Raspberry Pi Foundation also suffers from the hardware shortage
  • A bug found in Linux kernel version 5.19.12
  • Some latency in the latest Linux kernel development: here's why!
  • Security updates
  • Reported bugs
  • Write for the newsletter

25 October 2022, 20.10

23 October 2022

For a few days now, and again last night, while talking with my mentor Mauro IU0NDT, he advised me to install version 7 of the Supermon node manager (I found the new version 7.1+ ready to go), which can coexist as a web service alongside the previous node manager.

The post AllStarLink – installare il node manager Supermon sul sistema ASL first appeared on Il mondo di Paolettopn (IV3BVK - K1BVK).
23 October 2022, 08.00

19 October 2022

The stable version 2.0.0 of the DVSwitch Mobile Android app was released a few days ago. Over the past few days I have run several tests using this Android app on my ASL nodes, also connecting others in WT (Web Transceiver) mode.

The post DVSwitch Mobile – aggiornamento alla versione stabile 2.0.0 (173) first appeared on Il mondo di Paolettopn (IV3BVK - K1BVK).
19 October 2022, 08.00

18 October 2022

Issue N° 033/2022 of the ubuntu-it community newsletter is available. In this issue:

  • The importance of documentation
  • A new VSCode extension for the Vanilla CSS framework
  • GNOME's Extension Manager gets an update!
  • LibreOffice 7.4.2 has been released!
  • A new update for the NVIDIA 520.56.06 graphics driver
  • Security updates
  • Reported bugs
  • Development team statistics
  • Write for the newsletter

18 October 2022, 20.05

A few days ago a new version of the firmware in question was released, adding several things, which you can read about in the document included in the firmware distribution package.

The post Radio Anytone 878 – firmware upgrade versione 3.0 first appeared on Il mondo di Paolettopn (IV3BVK - K1BVK).
18 October 2022, 13.16

11 October 2022

Issue N° 032/2022 of the ubuntu-it community newsletter is available. In this issue:

  • Canonical launches free personal Ubuntu Pro subscriptions for up to five machines
  • How to install Linux kernel 6.0 on Ubuntu 22.10
  • Full Circle Magazine Issue #185 in English
  • Linux kernel 6.0 released
  • The Free Software Awards are coming
  • Security updates
  • Reported bugs
  • Development team statistics
  • Write for the newsletter

11 October 2022, 20.06

04 October 2022

Issue N° 031/2022 of the ubuntu-it community newsletter is available. In this issue:

  • Ubuntu users receive a new Linux kernel security update
  • Ubuntu 22.10 Beta is finally available
  • A new extension adds Bluetooth management to GNOME's quick settings
  • New open-source speech recognition software
  • Raspberry Pi OS gets a series of updates
  • Security updates
  • Reported bugs
  • Development team statistics
  • Write for the newsletter

04 October 2022, 20.12

02 October 2022

Thanks to the active collaboration of my friends OM Tullio IZ2FTR and Pericle IK2UIZ, the new Telegram channel in question was activated last night.
It was a Telegram channel that did not yet exist; it was created to monitor traffic on Brandmeister TG 22406, dedicated to AllStarLink network traffic.

The post È disponibile il nuovo canale Telegram Last Heard AllStarLink per il TG 22406 DMR first appeared on Il mondo di Paolettopn (IV3BVK - K1BVK).
02 October 2022, 09.58

30 September 2022

A week ago, the talented programmer Roger VK3KYY published a new post on the OpenGD77.com forum announcing the news:
the Retevis RT-3S radio can now also run the modified OpenGD77 firmware, although still in beta.

The post Retevis RT-3S – aggiornamento firmware OpenGD77 first appeared on Il mondo di Paolettopn (IV3BVK - K1BVK).
30 September 2022, 07.30

28 September 2022

Almost a month ago, my talented colleague and fellow radio amateur Andrea IW4EHJ gave me a hand refurbishing this used but well-kept rig, happily purchased from my radio amateur friend Carlo IW6DSL. As I had already promised some OMs, I decided to write an article about this rig and the procedure in question as well.

The post Yaesu FT1DE – aggiornamento firmware radio e DSP first appeared on Il mondo di Paolettopn (IV3BVK - K1BVK).
28 September 2022, 07.30

27 September 2022

Issue N° 030/2022 of the ubuntu-it community newsletter is available. In this issue:

  • Canonical joins the Connectivity Standards Alliance
  • GNOME 43 has been released
  • Linus Torvalds has confirmed the introduction of Rust into the Linux kernel
  • PipeWire will support Bluetooth LE audio
  • The GNU C Language Intro and Reference Manual is now available
  • Firefox 106 promises new features
  • Security updates
  • Reported bugs
  • Development team statistics
  • Write for the newsletter

27 September 2022, 20.38

20 September 2022

Issue N° 029/2022 of the ubuntu-it community newsletter is available. In this issue:

  • Call for Papers: Linux Day 2022
  • Retbleed, a new Spectre variant
  • A new update arrives for Raspberry Pi OS
  • The first point release of LibreOffice 7.4 arrives
  • An important milestone for Apache OpenOffice
  • Security updates
  • Reported bugs
  • Write for the newsletter

20 September 2022, 19.59

19 September 2022

Modifying the Yaesu CD-41 battery charger

Paolo Garbin (paolettopn)

Every time I buy a rig, whether new or used, I try to make sure everything works as well as possible, even if (sometimes) small modifications to the original circuits are necessary.
This time I bought an excellent rig, "used, but very well kept", from my friend Carlo IW6DSL. The radio is a Yaesu FT1DE in C4FM.

The post Modifica del carica batteria Yaesu CD-41 first appeared on Il mondo di Paolettopn (IV3BVK - K1BVK).
19 September 2022, 16.30

17 September 2022

For a few days now, the Gruppo Radio Firenze has made this new IPSC2 server available to radio amateurs, enabling a whole series of new tests and experiments.

The post GRF – Il nuovo server IPSC2-DIG-ITALIA della rete DMR+ first appeared on Il mondo di Paolettopn (IV3BVK - K1BVK).
17 September 2022, 15.55

15 September 2022

The Decree in question was signed on 31 August 2022 and published in the Gazzetta Ufficiale on 13 September 2022. The document contains some news for the amateur radio field as well, regarding the assignment of primary and/or secondary status, plus some extensions to the use of frequencies.

The post Piano Nazionale di Ripartizione delle Frequenze del MISE – edizione 2022 first appeared on Il mondo di Paolettopn (IV3BVK - K1BVK).
15 September 2022, 12.02

13 September 2022

Issue N° 028/2022 of the ubuntu-it community newsletter is available. In this issue:

  • Ubuntu Summit
  • Gimp Italia migrates to the linux.it Forum
  • Firefox 104: here's what's new
  • GNOME Shell mobile continues to take shape
  • LibreOffice 7.3.6 is available for download
  • Security updates
  • Reported bugs
  • Write for the newsletter

13 September 2022, 20.01

09 September 2022

Wishing, as always, to keep helping those who, like me, are venturing into this world of (non-professional) amateur radio measurements, and continuing the collaboration with my friend Antonio IZ0MXY, I decided to update the version of this program on my Linux PCs, to get better visualization and more up-to-date control of the instrument.

The post Installazione dello strumento NanoVNA-Saver 0.5.1 su PC first appeared on Il mondo di Paolettopn (IV3BVK - K1BVK).
09 September 2022, 15.44

06 September 2022

Issue N° 027/2022 of the ubuntu-it community newsletter is available. In this issue:

  • A first look at Ubuntu 22.10, which will be powered by Linux kernel 5.19
  • Canonical releases a new security update for Ubuntu 22.04 and 20.04 LTS
  • Full Circle Magazine Issue #183 in English
  • Full Circle Magazine Issue #184 in English
  • An annoying bug affecting Firefox 104 has been fixed
  • LibreOffice 7.4 released
  • Google reopens its Bug Bounty program for open-source projects
  • Security updates
  • Reported bugs
  • Write for the newsletter

06 September 2022, 19.57

27 August 2022

Helm is a remarkable piece of technology for managing your Kubernetes deployments, and used alongside Terraform it is perfect for deploying according to the GitOps strategy.

Terraform Helm CRDs manager

Illustration by unDraw+.

What’s GitOps? Great question! As this helpful introductory article summarizes, it is Infrastructure as Code, plus Merge Requests, plus Continuous Integration. Follow the link to explore the concept further.

However, Helm has a limitation: it doesn’t manage the lifecycle of Custom Resource Definitions (CRDs), meaning it will only install the CRD during the first installation of a chart. Subsequent chart upgrades will not add or remove CRDs, even if the CRDs have changed.

This can be a huge problem for a GitOps approach: having to update CRDs manually isn’t a great strategy, and makes it very hard to keep in sync with deployments and rollbacks.

For this very reason, I created a small Terraform module that reads CRD manifests published online and applies them. By parametrizing the chart version, it is simple to keep Helm charts and CRDs in sync, without having to do anything manually.

Example

Karpenter is an incredible open-source Kubernetes node provisioner built by AWS. If you haven’t tried it yet, take some minutes to read about it.

Let’s use Karpenter as an example of how to use the module. We deploy the chart with the Helm provider, and we use this new Terraform module to manage its CRDs as well:

resource "helm_release" "karpenter" {
  name            = "karpenter"
  namespace       = "karpenter"
  repository      = "https://charts.karpenter.sh"
  chart           = "karpenter"
  version         = var.chart_version

  // ... All the other parameters necessary, skipping them here ...
}

module "karpenter-crds" {
  source  = "rpadovani/helm-crds/kubectl"
  version = "0.1.0"
  
  crds_urls = [
    "https://raw.githubusercontent.com/aws/karpenter/v${var.chart_version}/charts/karpenter/crds/karpenter.sh_provisioners.yaml",
    "https://raw.githubusercontent.com/aws/karpenter/v${var.chart_version}/charts/karpenter/crds/karpenter.k8s.aws_awsnodetemplates.yaml"
  ]
}

As you can see, we parametrize the version of the chart, so we can be sure the CRDs have the same version as the Helm chart. Behind the curtains, Terraform will download the raw files and apply them with kubectl. Of course, the operator running Terraform needs enough permissions to apply such manifests, so you need to configure the kubectl provider.

The URLs must point to just the Kubernetes manifests, which is why we use the raw version of the GitHub URLs.
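Internally, the module could be sketched like this (a plausible approximation, not the actual source; it assumes the hashicorp/http data source and the gavinbunney/kubectl provider):

```hcl
# Download each CRD manifest from its raw URL...
data "http" "crd" {
  for_each = toset(var.crds_urls)
  url      = each.value
}

# ...and apply it to the cluster through the kubectl provider.
resource "kubectl_manifest" "crd" {
  for_each  = data.http.crd
  yaml_body = each.value.response_body
}
```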

The source code of the module is available on GitHub, so you are welcome to chime in and open any issue: I will do my best to address problems and implement suggestions.

Conclusion

I use this module in production, and I am very satisfied with it: it brings under GitOps the last part I was missing: the CRDs. Now, my only task when I install a new chart is finding all its CRDs and building URLs that contain the chart version. Terraform takes care of the rest.

I hope this module can be as useful to you as it is to me. If you have any questions or feedback, or if you would like some help, please leave a comment below, or write me an email at hello@rpadovani.com.

Ciao,
R.

27 August 2022, 00.00

12 July 2022

Issue N° 026/2022 of the ubuntu-it community newsletter is available. In this issue:

  • Some performance improvements in the Firefox snap package - Part III
  • The newsletter goes on holiday
  • A GNOME theme for Firefox: here's how!
  • Full Circle Magazine Issue #182 in English
  • Raspberry Pi Pico W: your $6 IoT platform
  • Thunderbird 102: here's what's new
  • Security updates
  • Reported bugs
  • Development team statistics
  • Write for the newsletter

12 July 2022, 20.05

04 May 2022

Contributing to any open-source project is a great way to spend a few hours each month. I started more than 10 years ago, and it has ultimately shaped my career in ways I couldn’t have imagined!

GitLab logo as cover

The new GitLab logo, just announced on the 27th April 2022.

Nowadays, my contributions focus mostly on GitLab, so you will see many references to it in this blog post, but the content is quite generalizable; I would like to share my experience to highlight why you should consider contributing to an open-source project.

Writing blog posts, tweeting, and helping foster the community are nice ways to contribute to a project ;-)

And contributing doesn’t only mean coding; there are countless ways to help an open-source project: translating it into different languages, reporting issues, designing new features, writing documentation, offering support on forums and StackOverflow, and so on.

Before diving into this wall of text, be aware that this blog post has three main parts after this introduction: a context section, where I describe my personal experience with open source and what it means to me; then, a nice list of reasons to contribute to any project; in closing, some tips on how to start contributing, both in general and specifically for GitLab.

Context

Ten years ago, I was fresh out of high school, without (almost) any knowledge about IT: however, I found out that I had a massive passion for it, so I enrolled in a Computer Engineering university (boring!), and I started contributing to Ubuntu (cool!). I began with the Italian Local Community, and I soon moved to Ubuntu Touch.

I often considered rewriting that old article: however, I have a strange attachment to it as it is, with all those English mistakes. It is one of the first blog posts I wrote, and it was really well received! We all know how it ended, but it still was a fantastic ride, with a lot of great moments: just take a look at the archive of this blog, and you can see the passion and the enthusiasm I had. I was so enthusiastic that I wrote a blog post similar to this one! I think it highlights really well the considerable difference 10 years make.

Back then, I wasn’t working, just studying, so I had a lot of spare time. My English was way worse. I was at the beginning of my journey in the computing world, and Ubuntu ultimately shaped a big part of it. My knowledge was very limited, and I had never worked before. Contributing to Ubuntu gave me a glimpse of the real world; I met outstanding engineers who taught me a lot, and it boosted my CV, helping me land my first job.

Advocacy, as in this blog post, is a great way to contribute! You spread awareness, which helps find new contributors, and maybe inspires some young student to give it a try! Since then, I completed a master’s degree in C.S., worked at different companies in three different countries, and became a professional. Nowadays, my contributions to open source are more sporadic (adulthood, yay), but given how much it meant to me, I am still a big fan, and I try to contribute when I can, and how I can.

Why contribute

Friends

During my years contributing to open-source software, I’ve met countless incredible people, some of whom I’ve become friends with. In the old blog post I mentioned David: over the last 9 years we stayed in touch, and met on different occasions in different cities: the last time was as recent as last summer. Back then, he was a Manager in the Ubuntu Community Team at Canonical; he then became Director of Community Relations at GitLab. Small world!

The Ubuntu Touch Community Team in Malta, in 2014

The Ubuntu Touch Community Team in Malta, in 2014. It has been an incredible week, sponsored by Canonical!

One interesting thing is that people contribute to open-source projects from their homes, all around the world: when I travel, I usually know somebody living in my destination city, so I always have at least one night booked for a beer with somebody I’ve only met online; it’s a pleasure to speak with people from different backgrounds, and to get a glimpse into their lives, all united by one common passion.

Fun

Having fun is important! You cannot spend your leisure time getting bored or annoyed: contributing to open source is fun ‘cause you pick the problems you would like to work on, and you don’t need all the bureaucracy and meetings that are often part of your daily job. You can be challenged, feel useful, and improve a product, without any manager on your shoulders, and at your own pace.

Being up-to-date on how things evolve

For example, the GitLab Handbook is a precious collection of resources, ideas, and methodologies on how to run a 1000-person company in a transparent, fully remote way. It’s a great read, with a lot of wisdom.

Contributing to a project typically gives you an idea of how the teams behind it work, which technologies they use, and which methodologies. Many open-source projects use bleeding-edge technologies, or blaze a trail. Being in contact with new ideas is a great way to know where the industry is headed and what the latest news is: this is especially true if you hang out in the channels where the community meets, be it Discord, forums, or IRC (well, IRC is not really bleeding-edge, but it is fun).

Learning

When contributing in an area that doesn’t match your expertise, you always learn something new: reviews are usually precise and on point, and projects of a remarkable size commonly have a coaching team that helps you start contributing and guides you toward landing your first patches.

In GitLab, if you need help merging your code, there are the Merge Request Coaches! And for any kind of help, you can always join Gitter, ask on the forum, or write to the dedicated email address.

Feel also free to ping me directly if you want some general guidance!

Giving back

I work as a Platform Engineer. My job is built on an incredible amount of open-source libraries and amazing FOSS services, and I basically just have to glue different pieces together. When I find some rough edge that could be improved, I try to do so.

Nowadays, I find well-maintained documentation crucial, so after I have achieved something complex, I usually go back and try to improve the documentation where it is lacking. It is my tiny way of saying thanks, and of giving back to a world that has really shaped my career.

This is also what most of my blog posts are about: after completing something I struggled with, I find it nice to be able to share that information. Every so often, I find myself following my own guide years later, and I really appreciate it when other people find the content useful too.

Swag

Who doesn’t like swag? :-) Numerous projects have delightful swag, starting with stickers, that they like to share with the whole community. Of course, it shouldn’t be your main driver, ‘cause you will soon notice that it is ultimately not worth the amount of time you spend contributing, but it is charming to have GitLab socks!

A GitLab branded mechanical keyboard

A GitLab branded mechanical keyboard, courtesy of the GitLab's security team! This very article has been typed with it!

Tips

I hope I inspired you to contribute to some open-source project (maybe GitLab!). Now, let’s talk about some small tricks on how to begin easily.

Find something you are passionate about

You should find a project you are passionate about, and that you use frequently. Looking forward to a release, knowing that your contributions will be included, is wonderfully satisfying, and can really push you to do more.

Moreover, if you already know the project you want to contribute to, you probably already know its biggest pain points, and where the project needs contributions.

Start small and easy

You don’t need to make gigantic contributions to begin. Find something tiny, so you can get familiar with the project’s workflows, and with how contributions are received.

Launchpad and bazaar instead of GitLab and git: a trip down memory lane! My journey with Ubuntu started with correcting a typo in a README, and here I am, years later, having contributed to dozens of projects, and having a career in the C.S. field. Back then, I really had no idea what my future would hold.

For GitLab, you can take a look at the issues marked as “good for new contributors”. They are designed to be addressed quickly, and to onboard new people into the community. This way, you don’t have to focus on the difficulties of the task at hand, but can easily explore how the community works.

Writing issues is a good start

Writing high-quality issues is a great way to start contributing: the maintainers of a project are not always aware of how the software is used, and cannot be aware of every issue. If you know that something could be improved, write it down: spend some time explaining what happens, what you expect, and how to reproduce the problem, and maybe suggest some solutions as well! Perhaps the first issue you write down will be the very first issue you resolve.

Not much time required!

Contributing to a project doesn’t necessarily require a lot of time. When I was younger, I definitely dedicated way more time to open-source projects, implementing gigantic features. Nowadays, I don’t do that anymore (life is much more than computers), but I like to think that my contributions are still useful. Still, I don’t spend more than a couple of hours a month, depending on my schedule, and on how much it rains (yep, in winter I definitely contribute more than in summer).

GitLab is super easy

Do you use GitLab? Then you should undoubtedly try to contribute to it. It is easy, it is fun, and there are many ways. Take a look at this guide, hang out on Gitter, and see you around. ;-)

Next week (9th-13th May 2022) there is also a GitLab Hackathon! It is a really fun and easy way to start contributing: many people are available to help you, there are video sessions about contributing, and just by making a small contribution you will receive a nice prize.

And if I was able to do it with my few contributions, you can too! In time, if you are consistent in your contributions, you can become a GitLab Hero! How cool is that?

I really hope this wall of text made you consider contributing to an open-source project. If you have any questions or feedback, or if you would like some help, please leave a comment below, or write me an email at hello@rpadovani.com.

Ciao,
R.

il 04 May 2022 00.00

19 March 2022

Rust is all hot these days, and it is indeed a nice language to work with. In this blog post, I take a look at a small challenge: how to host private crates in the form of Git repositories, making them easily available both to developers and CI/CD systems.

cover

Ferris the crab, unofficial mascot for Rust.

A Rust crate can be hosted in different places: on the public registry crates.io, but also in a private Git repo hosted somewhere. In the latter case, there are some challenges in making the crate easily accessible to both engineers and CI/CD systems.

Developers usually authenticate through SSH keys: given that humans are terrible at remembering long passwords, using SSH keys lets us avoid memorizing credentials, and gives us an authentication method that just works. Security is quite high as well: each device has a unique key, and if a device gets compromised, deactivating the related key solves the problem.

On the other hand, it is better to avoid SSH keys for CI/CD systems: such systems are highly volatile, with dozens, if not hundreds, of instances that get created and destroyed hourly. For them, a short-lived token is a far better choice: creating and revoking SSH keys on the fly is tedious and error-prone.

This raises a challenge with Rust crates: if hosted in a private repository, the dependency can be reached through SSH, such as

[dependencies]
my-secret-crate = { git = "ssh://git@gitlab.com/rpadovani/my-secret-crate.git", branch = "main" }

or through HTTPS, such as

[dependencies]
my-secret-crate = { git = "https://gitlab.com/rpadovani/my-secret-crate", branch = "main" }

In the future, I hope we won’t need to host private crates in Git repositories: GitLab should add a native implementation of a private registry for crates.

The former is really useful and simple for engineers: authentication is the same as always, so there is no need to worry about it. However, it is awful for CI/CD: now the lifecycle of SSH keys for automated systems needs to be managed.

The latter is awful for engineers: they need an additional authentication method, which slows them down, and of course there will be authentication problems. On the other hand, it is great for automated systems.

How to reconcile the two worlds?

Well, let’s use them both! In the Cargo.toml file, use the SSH protocol, so developers can simply clone the main repo, and they will be able to clone the dependencies without further hassle.

Then, configure the CI/CD system to clone every dependency through HTTPS, thanks to a neat feature of Git itself: insteadOf.

From the Git-SCM website:

url.<base>.insteadOf:
Any URL that starts with this value will be rewritten to start, instead, with <base>. In cases where some site serves a large number of repositories, and serves them with multiple access methods, and some users need to use different access methods, this feature allows people to specify any of the equivalent URLs and have Git automatically rewrite the URL to the best alternative for the particular user, even for a never-before-seen repository on the site. When more than one insteadOf strings match a given URL, the longest match is used.

Basically, it allows rewriting part of the URL automatically. In this way, it is easy to change the protocol used: developers will be happy, and the security team won’t have to scream about long-lived credentials on CI/CD systems.
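For reference, the git config --global command used in the CI example below writes an entry equivalent to this ~/.gitconfig fragment (the token value here is a placeholder):

[url "https://gitlab-ci-token:TOKEN@gitlab.com"]
    insteadOf = ssh://git@gitlab.com

With this in place, every fetch of an ssh://git@gitlab.com/… URL transparently goes through authenticated HTTPS, including the fetches Cargo performs for Git dependencies.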

Do you need an introduction to GitLab CI/CD? I’ve written something about it!

An implementation example using GitLab, but it can be done on any CI/CD system:

my-job:
  before_script:
    - git config --global credential.helper store
    - echo "https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com" > ~/.git-credentials
    - git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com".insteadOf ssh://git@gitlab.com
  script:
    - cargo build

The CI_JOB_TOKEN is a unique token valid only for the duration of the GitLab pipeline. In this way, even if a machine gets compromised, or logs leak, the code is still sound and safe.

What do you think about Rust? If you use it, have you integrated it with your CI/CD systems? Share your thoughts in the comments below, or drop me an email at riccardo@rpadovani.com.

Ciao,
R.

il 19 March 2022 00.00

15 December 2021

AWS EKS is a remarkable product: it manages Kubernetes for you, letting you focus on creating and deploying applications. However, if you want to manage permissions according to the shared responsibility model, you are in for some wild rides.

cover

Image courtesy of unDraw.

The shared responsibility model

First, what’s the shared responsibility model? Well, to design a well-architected application, AWS suggests following six pillars. Among these six pillars, one is security, and security includes sharing responsibility between AWS and the customer. In particular, and I quote,

Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

Beautiful, isn’t it? AWS gives us a powerful tool, IAM, to manage permissions; we have to configure things in the best way, and AWS gives us the way to do so. Or does it? Let’s take a look together.

Our goal

I would say the goal is simple, but since we are talking about Kubernetes, things cannot be just simple.

Our goal is quite straightforward: setting up a Kubernetes cluster for our developers. Given that AWS offers AWS EKS, a managed Kubernetes service, we only need to configure it properly, and we are done. Of course, we will follow best practices to do so.

A proper setup

Infrastructure as code is out of scope for this post, but if you have never heard about it before, I strongly suggest taking a look into it.

Of course, we don’t use the AWS console to manually configure stuff, but Infrastructure as Code: basically, we will write some code that will call the AWS APIs on our behalf to set up AWS EKS and everything correlated. In this way, we can have a reproducible setup that we could deploy in multiple environments, and countless other advantages.

Moreover, we want to avoid launching scripts that interact with our infrastructure from our PCs: we prefer not to have the permissions to destroy important stuff! Separation of concerns is fundamental, and we want to write code without worrying about consequences in the real world. All our code should be vetted by somebody else through a merge request, and after being approved and merged to our main branch, a runner will pick it up and apply the changes.

A runner is any CI/CD that will execute the code on your behalf. In my case, it is a GitLab runner, but it could be any continuous integration system.

We are at the core of the problem: our runner should follow the principle of least privilege, being able to do only what it needs to do, and nothing more. This is why we will create an IAM role just for it, with only the permissions to manage our EKS cluster and everything in it.

I have a massive rant in store about how bad the AWS documentation for IAM is in general, not only for EKS, but I will leave it for some other day.

The first part of creating a role with minimum privileges is, well, understanding what minimum means in our case. A starting point is the AWS documentation: unfortunately, concerning IAM permissions, it is always a bad starting point, ’cause it is always too generous in allowing permissions.

The “minimum” permissions according to AWS

According to the guide, the minimum permissions necessary for managing a cluster are being able to perform any action on any EKS resource. A bit far-fetched, isn’t it?

Okay, hardening this will be fun, but hey, do not let bad documentation get in the way of a proper security posture.

You know what will get in the way? Bugs! A ton of bugs, with absolutely useless error messages.

I started by limiting access to only the namespace of the EKS cluster I wanted to create. I naively thought that we could simply limit access to the resources belonging to the cluster. But, oh boy, was I mistaken!

Looking at the documentation for IAM resources and actions, I created this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "eks:ListClusters",
                "eks:DescribeAddonVersions",
                "eks:CreateCluster"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "eks:*",
            "Resource": [
                "arn:aws:eks:eu-central-1:123412341234:addon/my-cluster/*/*",
                "arn:aws:eks:eu-central-1:123412341234:fargateprofile/my-cluster/*/*",
                "arn:aws:eks:eu-central-1:123412341234:identityproviderconfig/my-cluster/*/*/*",
                "arn:aws:eks:eu-central-1:123412341234:nodegroup/my-cluster/*/*",
                "arn:aws:eks:eu-central-1:123412341234:cluster/my-cluster"
            ]
        }
    ]
}

Unfortunately, if a role with these permissions tries to create a cluster, this error message appears:

Error: error creating EKS Add-On (my-cluster:kube-proxy): AccessDeniedException: User: arn:aws:sts::123412341234:assumed-role/<role>/<iam-user> is not authorized to perform: eks:TagResource on resource: arn:aws:eks:eu-central-1:123412341234:/createaddon

I have to say that at least the error message gives you a hint: the /createaddon action is not scoped to the cluster.

After fighting with different policies for a while, I asked DuckDuckGo for help, and indeed somebody had reported this problem to AWS before, in this GitHub issue.

What the issue basically says is that if we want to give an IAM role permission to manage an add-on inside a cluster, we must give it permissions over all the EKS add-ons in our AWS account.

This of course breaks the AWS shared responsibility principle, ’cause they don’t give us the tools to uphold our part of the deal. This is why it is a real and urgent issue, as they also mention in the ticket:

Can’t share a timeline in this forum, but it’s a high priority item.

And indeed it is such a high-priority item that it was reported on 3rd December 2020, and today, more than one year later, the issue is still there.

To add insult to injury, you have to write the right policy manually, because if you use the IAM interface and select “Any resource” for the add-ons, as in the screenshot below, it will generate the wrong policy! If you check carefully, the generated resource name is arn:aws:eks:eu-central-1:123412341234:addon/*/*/*, which of course doesn’t match the ARN expected by AWS EKS. Basically, even if you are far too permissive, and you use the tools that AWS provides, you still end up with a broken policy.

generate-addons-policy

Do you have some horror story about IAM yourself? I have a lot of them, and I am thinking about a more general post. What do you think? Share your thoughts in the comments below, or drop me an email at riccardo@rpadovani.com.

Ciao,
R.

il 15 December 2021 10.00

19 August 2021

Terraform is a powerful tool. However, it has some limitations: since it uses the AWS APIs, it doesn’t have a native way to check whether an EC2 instance has finished running cloud-init before marking it as ready. A possible workaround is asking Terraform to SSH into the instance, and wait until a connection succeeds before marking the instance as ready.

cover

Terraform logo, courtesy of HashiCorp.

I find using SSH in Terraform quite problematic: you need to distribute a private SSH key to anybody who will launch the Terraform script, including your CI/CD system. This is a no-go for me: it adds the complexity of managing SSH keys, including their rotation. There is a huge issue on the Terraform repo on GitHub about this functionality, and the most voted solution is indeed connecting via SSH to run a check:

provisioner "remote-exec" {
  inline = [
    "cloud-init status --wait"
  ]
}

AWS Systems Manager Run Command

The idea of using cloud-init status --wait is indeed quite good. The only problem is how to ask Terraform to run such a command. Luckily for us, AWS has a service, AWS SSM Run Command, that allows us to run commands on an EC2 instance through the AWS APIs! In this way, our CI/CD system only needs an appropriate IAM role and a way to invoke AWS APIs. I use the AWS CLI in the examples below, but you can adapt them to any language you prefer.

Prerequisites

If you don’t know AWS SSM yet, go take a look at their introductory guide. There are some prerequisites for using AWS SSM Run Command: we need to have the AWS SSM Agent installed on our instance. It is preinstalled on Amazon Linux 2 and Ubuntu 16.04, 18.04, and 20.04. For any other OS, we need to install it manually: it is supported on Linux, macOS, and Windows.

The user or the role that executes the Terraform code needs to be able to create, update, and read AWS SSM Documents, and to run SSM commands. A possible policy could look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1629387563127",
      "Action": [
        "ssm:CreateDocument",
        "ssm:DeleteDocument",
        "ssm:DescribeDocument",
        "ssm:DescribeDocumentParameters",
        "ssm:DescribeDocumentPermission",
        "ssm:GetDocument",
        "ssm:ListDocuments",
        "ssm:SendCommand",
        "ssm:UpdateDocument",
        "ssm:UpdateDocumentDefaultVersion",
        "ssm:UpdateDocumentMetadata"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

If we already know the name of the documents, or the instances where we want to run the commands, it is better to lock down the policy by specifying the resources, according to the principle of least privilege.
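As an illustration, the SendCommand action could be restricted to one specific document and to EC2 instances only, with a statement like the following (the account ID, region, and document name are placeholders, matching the examples in this post):

{
  "Effect": "Allow",
  "Action": "ssm:SendCommand",
  "Resource": [
    "arn:aws:ssm:eu-central-1:123412341234:document/cloud-init-wait",
    "arn:aws:ec2:eu-central-1:123412341234:instance/*"
  ]
}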

Last but not least, we need to have the AWS CLI installed on the system that will execute Terraform.

The Terraform code

After having set up the prerequisites as above, we need two different Terraform resources. The first will create the AWS SSM Document with the command we want to execute on the instance. The second will execute that command while provisioning the EC2 instance.

The AWS SSM Document code will look like this:

resource "aws_ssm_document" "cloud_init_wait" {
  name = "cloud-init-wait"
  document_type = "Command"
  document_format = "YAML"
  content = <<-DOC
    schemaVersion: '2.2'
    description: Wait for cloud init to finish
    mainSteps:
    - action: aws:runShellScript
      name: StopOnLinux
      precondition:
        StringEquals:
        - platformType
        - Linux
      inputs:
        runCommand:
        - cloud-init status --wait
    DOC
}

We can refer to this document from within our EC2 instance resource, with a local provisioner:

resource "aws_instance" "example" {
  ami           = "my-ami"
  instance_type = "t3.micro"

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]

    command = <<-EOF
    set -Ee -o pipefail
    export AWS_DEFAULT_REGION=${data.aws_region.current.name}

    command_id=$(aws ssm send-command --document-name ${aws_ssm_document.cloud_init_wait.arn} --instance-ids ${self.id} --output text --query "Command.CommandId")
    if ! aws ssm wait command-executed --command-id $command_id --instance-id ${self.id}; then
      echo "Failed to start services on instance ${self.id}!";
      echo "stdout:";
      aws ssm get-command-invocation --command-id $command_id --instance-id ${self.id} --query StandardOutputContent;
      echo "stderr:";
      aws ssm get-command-invocation --command-id $command_id --instance-id ${self.id} --query StandardErrorContent;
      exit 1;
    fi;
    echo "Services started successfully on the new instance with id ${self.id}!"

    EOF
  }
}

From now on, Terraform will wait for cloud-init to complete before marking the instance ready.

Conclusion

AWS Session Manager, AWS Run Command, and the other tools in the AWS Systems Manager family are quite powerful, and in my experience they are not widely used. I find them extremely useful: for example, they also allow connecting via SSH to instances without having any port open, including port 22! Basically, they allow managing and running commands inside instances purely through AWS APIs, with a lot of benefits, as AWS explains:

Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also allows you to comply with corporate policies that require controlled access to instances, strict security practices, and fully auditable logs with instance access details, while still providing end users with simple one-click cross-platform access to your managed instances.

Do you have any questions, feedback, criticism, or requests for support? Leave a comment below, or drop me an email at riccardo@rpadovani.com.

Ciao,
R.

il 19 August 2021 18.30

17 August 2021

Adding comments to the blog

Riccardo Padovani

After years of blogging, I’ve finally chosen to add a comment system, including reactions, to this blog. I’ve done so to make it easier to engage with the four readers of my blabbering: of course, it took some time to choose the right comment provider, but finally, here we are!

cover

A picture I took at the CCC 2015 - and the moderation policy for comments!

I’ve chosen, long ago, not to put any client-side analytics on this website: while the nerd in me loves graphs and numbers, I don’t think they are worth the loss of privacy and performance for the readers. However, I am curious to get feedback on the content I write, whether some of it is useful, and how I can improve. In all these years, I’ve received various emails about some posts, and they are heart-warming. With comments, I hope to reduce the friction of communicating with me, and to have some meaningful interaction with you.

Enter hyvor.com

Looking for a comment system, I had three requirements in mind: being privacy-friendly, being performant, and being managed by somebody else.

A lot of comment systems are “free”, because they live on advertisements, or on tracking user activity, and most often a combination of both. However, since I want comments on my website, I find it dishonest that you, the reader, have to pay the price for them. So, I was looking for something that doesn’t track users, and bases its business on dear old money. My website, my wish, my wallet.

I find performance important, and unfortunately quite undervalued on the bloated Internet. I like having just some HTML, some CSS, and the minimum necessary JavaScript, so whoever stumbles on these pages doesn’t waste CPU and time waiting for some rendering. Being a static website, there is no server side, so I cannot put comments there. I had to find a really light JavaScript-based comment system.

Given these two prerequisites, you would say the answer is obvious: find a comment system that you can self-host! And you would be perfectly right - however, since I already spend 8 hours a day keeping stuff online, I really don’t want to care about performance and uptime in my free time - I definitely prefer going for a beer with a friend.

After some shopping around, I’ve chosen to go with Hyvor Talk, since it checks all three requirements above. I’ve read nice things about it, so let’s see how it goes! And if you don’t see comments at the end of the page, probably a privacy plugin for your browser is blocking it - up to you if you want to whitelist my website, or communicate with me in other ways ;-)

A nice plus of Hyvor is that they also support reactions, so if you are in a hurry but still want to leave quick feedback on the post, you can simply click a button. Fancy, isn’t it?

Moderation

The Internet can be ugly sometimes, and this is why I will keep a strict eye on the comments, and I will probably adjust moderation settings in the future, based on how things evolve - maybe no one will comment, and then there will be no need for any strict moderation! The only rule I ask you to abide by, and that I’ve put as the moderation policy, is: “Be excellent to each other”. I read it at the CCC Camp 2015, and it has stuck with me: like every short sentence, it cannot capture all the nuances of human interaction, but I think it is a very solid starting point. If you have any concern or feedback you prefer not to express in public, feel free to reach me through email. Otherwise, I hope to see you all in the comment section ;-)

Questions, comments, feedback, criticism, suggestions on how to improve my English? Drop me an email at riccardo@rpadovani.com. Or, from today, leave a comment below!

Ciao,
R.

il 17 August 2021 22.00

15 August 2021

“Build smaller, faster, and more secure desktop applications with a web frontend” is the promise made by Tauri. And indeed, it is a great Electron replacement. But since it is in its early days (the beta has just been released!), a bit of documentation is still missing, and there aren’t many examples on the internet of how to write code.

cover

Tauri is very light, as highlighted by these benchmarks.

Haven’t you started with Tauri yet? Go and read the intro doc! One thing that took me a bit of time to figure out is how to read environment variables from the frontend of the application (JavaScript / TypeScript, or whichever framework you are using).

The Command

As usual, when you know the solution the problem seems easy. In this case, we just need an old trick: delegating!

In particular, we will use the Command API to spawn a sub-process (printenv) and read the returned value. The code is quite straightforward, and it is synchronous:

import { Command } from "@tauri-apps/api/shell";

async function readEnvVariable(variableName: string): Promise<string> {
    const commandResult = await new Command(
        "printenv",
        variableName
    ).execute();

    if (commandResult.code !== 0) {
      throw new Error(commandResult.stderr);
    }
    
    return commandResult.stdout;
}

The example is in TypeScript, but it can be easily transformed into standard JavaScript.

The Command API is quite powerful, so you can read its documentation to adapt the code to your needs.

Requirements

There are a couple of other things you should consider: first, you need to have the @tauri-apps/api package installed in your frontend environment. You also need to allow your app to execute commands. Since Tauri puts a strong emphasis on security (and rightly so!), your app is sandboxed by default. To be able to use the Command API, you need to enable the shell.execute API.

It should be as easy as setting tauri.allowlist.shell.execute to true in your tauri.json config file.
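For illustration, the resulting fragment of the config file would look something like this (a sketch derived from the setting named above; double-check the allowlist schema for the Tauri version you use):

{
  "tauri": {
    "allowlist": {
      "shell": {
        "execute": true
      }
    }
  }
}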

Tauri is very nice, and I really hope it will conquer the desktop and replace Electron, since it is lighter, faster, and safer!

Questions, comments, feedback, criticism, suggestions on how to improve my English? Leave a comment below, or drop me an email at riccardo@rpadovani.com.

Ciao,
R.

il 15 August 2021 20.00

26 July 2021

Want to quit from a conversation with a bot provided through Dialogflow? You can use an Italian city, too.

Imagine that you go to the registry office.

The employee has to ask different questions, like your name, birthday, and so on.

He has just asked the first one; you expect a lot more questions.

Second one 

BOT: “What’s your first name?”

USER: “John”

BOT: “Sorry to hear that. Bye. Next, please!”

This is what I am going to tell you in this story, where our main character is not the employee but Google Dialogflow.

BOT: Where are you from?

USER: New York City

BOT: Sorry to hear this, bye!

Wait, what?

Imagine that a bot is asking a simple question “where are you from?”. You insert the correct value and BOOOM. The bot suddenly closes the conversation!

Overview

TIP: familiar with Dialogflow? Start from Conversation exits!

We use Google Dialogflow to provide a rich conversation experience to our customers, driving them through different Intents according to some data that we need them to submit and/or modify.

google.com – Intents

To give context to someone not used to Dialogflow, an Intent categorizes an end-user’s intention for one conversation turn. For each agent, you define many intents, where your combined Intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression, Dialogflow matches the end-user expression to the best intent in your agent. 

When an intent is matched at runtime, Dialogflow provides the extracted values from the end-user expression as parameters. 

Each parameter is described by different attributes, forming a parameter configuration. For this article, let’s say that we are interested at some point in the conversation in a parameter called city that has an Entity type @sys.geo-city.

That’s it. We ask for a parameter called city and we expect our user to write a city in his response:

“Los Angeles”

“I live in Los Angeles”

“I was in Los Angeles”

are all valid sentences for us: Los Angeles is extracted from the sentence and assigned to the parameter city.

Let’s finish this paragraph with the definition of Context: Dialogflow contexts are similar to natural language context. If a person says to you “they are orange”, you need context in order to understand what the person is referring to.

Conversation exits

Dialogflow has different features automatically provided that help you to create a rich conversation without having to reinvent the wheel.

System entities are one of these features.

Another important one is something called Conversation exits.

Some magic words will stop the conversation:

  • “exit”
  • “cancel”
  • “stop”

It’s impossible to prevent this behavior; you can only apply some custom logic and reply one last time with a custom response.

Intent, parameters, conversation exits: a recap.

So, we are in a specific Intent, asking the user to give us a city name for our city parameter. At the same time, we know that some magic words will stop the conversation.

The Italian city that stops the conversation

Now we have all the elements to introduce the problem described in the article title. Our clients are from Italy, so the chat is in Italian, and 99.99% of the time the user is going to answer our question with an Italian city.

One day, we had a support request from one of our clients: “When I insert the city where I live, the system replies with “OK, canceled. Goodbye.”.

-.-‘.

Open the logs.

Open the Dialogflow console.

Check the history.

Oh, wait, here it is! The city is called Fermo, and “Fermo!” means “Stop!”.

BOT: Can you please tell me where XYZ happened?

USER: Fermo

BOT: Sure, goodbye.

The user’s reaction would be something like -.-‘ .

What the Dialogflow Support team told us about it

We tried to reach out to the Dialogflow support team.

This is the most important part of the email:

Conversational exits for Actions on Google(AoG) are implemented on the AoG app’s side and cannot be overridden by Dialogflow.

Unfortunately, our team is unable to assist as your question is more closely related to Actions on Google

What the Actions on Google Support Team told us about it

We can understand that the word “Fermo” is in reference to a city. However, it is also a system level hotword. Unfortunately there is no solution to bypass the system limitation. Please consider maybe using Fermo city / town or something along those lines, however that may not be guaranteed since it is a system hotword. 

Read it again:

there is no solution to bypass the system limitation

Whaaaaat?? What we did

We provide the Dialogflow experience through the PHP Client library.

We created a list of potentially dangerous cities, which right now contains only Fermo.

When the user provides a single word response and we are in an Intent where we are asking for a city to extract a value for our @sys.geo-city parameter, we check if this word is in the list.

If so, we wrap the city in a context: “I live in <word>” before sending the response to Dialogflow.

According to the AoG support team’s response, there is no guarantee that this works, but currently it seems to work just fine.
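The check can be sketched in a few lines. This is illustrative Python, not our production PHP code, and the wrapping sentence and list name are assumptions (the real chat is in Italian):

```python
# Cities that collide with Dialogflow/AoG system hotwords ("Fermo" = "Stop!").
DANGEROUS_CITIES = {"fermo"}

def wrap_city(user_text: str) -> str:
    """If the reply is a bare 'dangerous' city name, wrap it in a sentence
    so the single word is no longer matched as a conversation exit."""
    word = user_text.strip()
    if word.lower() in DANGEROUS_CITIES:
        return f"I live in {word}"
    return user_text

print(wrap_city("Fermo"))   # -> I live in Fermo
print(wrap_city("Milano"))  # -> Milano
```

Multi-word answers pass through untouched, since Dialogflow already handles them correctly.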

The post The Italian city that STOPs Google Dialogflow appeared first on L.S..

il 26 July 2021 07.26

26 March 2021

The Vision API now supports online (synchronous) small batch annotation (PDF/TIFF/GIF) for all features. The relevant documentation is Small batch file annotation online.

Let’s see how can we do this with PHP.

Context

Having PHP >= 7.4, the packages to require are:

google/cloud-vision
google/cloud-storage

Code

How to upload the file in the storage

Soon.

Text detection

Even with PDFs we are going to use ImageAnnotatorClient, the service that performs Google Cloud Vision API detection tasks over client images and returns detected entities from the images.

/* Classes from the google/cloud-vision package */
use Google\Cloud\Vision\V1\AnnotateFileRequest;
use Google\Cloud\Vision\V1\Feature;
use Google\Cloud\Vision\V1\Feature\Type;
use Google\Cloud\Vision\V1\GcsSource;
use Google\Cloud\Vision\V1\ImageAnnotatorClient;
use Google\Cloud\Vision\V1\ImageContext;
use Google\Cloud\Vision\V1\InputConfig;

$path = "gs://mystorage.com/path/to/my/file.pdf";

/* If you have it, you can give a hint about the language in the doc */
$context = new ImageContext();
$context->setLanguageHints(['it']);

/* Here's the annotator described before */
$imageAnnotator = new ImageAnnotatorClient();

/* We create an AnnotateFileRequest instance to annotate one single file */
$file_request = new AnnotateFileRequest();

/* We express our input file in terms of a GcsSource instance that represents the Google Cloud Storage location */
$gcs_source = (new GcsSource())
    ->setUri($path);

/* Let's specify the feature we need. You can find the options below */
$feature = (new Feature())
    ->setType(Type::DOCUMENT_TEXT_DETECTION);

/* Let's specify the file info: a PDF in that location */
$input_config = (new InputConfig())
    ->setMimeType('application/pdf')
    ->setGcsSource($gcs_source);

/* Some configurations, including the pages of the file to perform image annotation on */
$file_request = $file_request->setInputConfig($input_config)
    ->setFeatures([$feature])
    ->setPages([1]);

/* Annotate the files and get the responses by making the synchronous batch request */
$result = $imageAnnotator->batchAnnotateFiles([$file_request]);

/* We take the first result, because that's 1 page only */
$res = $result->getResponses();
$offset = $res->offsetGet(0);
$responses = $offset->getResponses();
$res = $responses[0];

/* Finally!!! The annotations! */
$annotations = $res->getFullTextAnnotation();

/* Clean up resources such as threads */
$imageAnnotator->close();

Features

In your request you can set the type of annotation you want to perform on the file. You can check the reference or the features list documentation.

Some examples are:

  • Face detection
  • Landmark detection
  • Logo detection
  • Label detection
  • Text and document text detection
  • ..

The post Google Vision: detect text in PDFs synchronously with PHP appeared first on L.S..

il 26 March 2021 15.53

14 February 2021

JetBrains Qodana is a new product, still in early access, that brings the “Smarts” of JetBrains IDEs into your CI pipeline, and it can be easily integrated into GitLab.

cover

Qodana on Gitlab Pages

In this blog post we will see how to integrate this new tool by JetBrains in our GitLab pipeline, including having a dedicated website to see the cool report it produces.

Aren’t you convinced yet? Read 4 Benefits of CI/CD! If you don’t have a proper GitLab pipeline to lint your code, run your tests, and manage all those other annoying small tasks, you should definitely create one! I’ve written an introductory guide to GitLab CI, and many more are available on the documentation website.

JetBrains Qodana

Qodana comprises two main parts: a nicely packaged GUI-less IntelliJ IDEA engine tailored for use in a CI pipeline as a typical “linter” tool, and an interactive web-based reporting UI. It makes it easy to set up workflows to get an overview of the project quality, set quality targets, and track progress on them. You can quickly adjust the list of checks applied for the project and include or remove directories from the analysis.

Qodana launching post.

I’m a huge fan of JetBrains products, and I happily pay for their licenses every year: my productivity using their IDEs is through the roof. This very blog post has been written using WebStorm :-)

Therefore, when they announced a new product to bring the smartness of their IDE on CI pipelines, I was super enthusiastic! In their documentation there is also a small paragraph about GitLab, and we’ll improve the example to have a nice way to browse the output.

At the moment, Qodana has support for Java, Kotlin, and PHP. With time, Qodana will support all languages and technologies covered by JetBrains IDEs.

Remember: Qodana is in an early access version. Using it, you expressly acknowledge that the product may not be reliable, may not work as intended, and may contain errors. Any use of the EAP product is at your own risk.

I also suggest taking a look at the project’s official GitHub page, to check licenses and open issues.

Integrating Qodana in GitLab

The basic example provided by JetBrains is the following:

qodana:
  image: 
    name: jetbrains/qodana
    entrypoint: [sh, -c]
  script:
    - /opt/idea/bin/entrypoint --results-dir=$CI_PROJECT_DIR/qodana --save-report --report-dir=$CI_PROJECT_DIR/qodana/report
  artifacts:
    paths:
      - qodana

While this works, it doesn’t provide a way to explore the report without first downloading it to your PC. If we have GitLab Pages enabled, we can publish the report and explore it online, thanks to the artifacts:expose_as keyword.

We also need GitLab to upload the right directory to Pages, so we change the artifact path as well:

qodana:
  image: 
    name: jetbrains/qodana
    entrypoint: [sh, -c]
  script:
    - /opt/idea/bin/entrypoint --results-dir=$CI_PROJECT_DIR/qodana --save-report --report-dir=$CI_PROJECT_DIR/qodana/report
  artifacts:
    paths:
      - qodana/report/
    expose_as: 'Qodana report'

Now, as soon as the Qodana job finishes, our merge request page has a new button to explore the report! You can see such a merge request here, with the button “View exposed artifact”, while here you can find an interactive online report, published on GitLab Pages!

Configuring Qodana

Full reference for the Qodana config file can be found on GitHub. Qodana can be easily customized: we only need to create a file called qodana.yaml, and enter our preferences!

There are two options I find extremely useful. One is exclude, which we can use to skip some checks, or some directories, so we can focus on what is important and save some time. The other is failThreshold: when this number of problems is reached, the container exits with error code 255. In this way, we can fail the pipeline and enforce a high quality bar for the code!
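As a rough illustration, a qodana.yaml combining the two options could look like the sketch below. The vendor and generated paths are placeholder names, and since Qodana is still in early access the exact schema may change, so double-check the configuration reference linked above:

```yaml
# Hypothetical qodana.yaml sketch: skip third-party directories and
# fail the pipeline once too many problems are found.
version: 1.0
exclude:
  - name: All
    paths:
      - vendor      # placeholder: directories we don't want analyzed
      - generated   # placeholder
failThreshold: 10   # the container exits with error 255 above this count
```

With failThreshold set, the qodana job defined earlier will fail the pipeline as soon as the problem count crosses the limit.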

Qodana already shows a lot of potential, even though it is only at its first version! I am really looking forward to support for other languages, and to the improvements JetBrains will make in upcoming releases!

Questions, comments, feedback, critics, suggestions on how to improve my English? Leave a comment below, or drop me an email at riccardo@rpadovani.com.

Ciao,
R.

il 14 February 2021 09.00

06 February 2021

700!

Ubuntu-it Gruppo Doc


Heeeere weeee gooo!!!! 🎉️🎊️🥂️🍾️

 

Why all the rejoicing?

Simple! Last January the wiki passed the milestone of 700 guides, between pages created and conceived from June 2011 to today 😊️

The honor of crossing that finish line went (and rightly so) to our "hurricane" wilecoyote who, despite having joined barely a year ago, has put in an incredible effort that has proved providential for the group's future.

Table & counts

And that's an underestimate, because our table does not count the myriad small daily edits or the periodic, substantial routine work done at every release. In fact, the page exists to easily keep track of the work done; the count is more of a treat.

 

 

Ah yes... the count. We are in the IT world after all, and you may have wondered which stack we use for these statistical calculations. Well, what else could we possibly use but... gnome-calculator! Implacable and infallible at summing natural numbers 😎️

 

A thank-you, and the next milestone...

But back to the wiki, and above all a thought for the whole user base. From those who spotted and fixed small errors to those who racked their brains writing complex guides: a heartfelt thanks for everyone's effort!

In June we will celebrate the tenth anniversary of this no-longer-new course, and it will be the occasion to retrace the steps and evolutions that, for better or worse and despite many difficulties, allowed us to get here and shout: 700! 😀️

 

By the Doc Group
Want to contribute to the Wiki? Start right away!

il 06 February 2021 12.24

27 January 2021


In April, support for Ubuntu 16.04 will end.
This means that in the section containing reports of Ubuntu installations on laptops, many of the links to guides will move into the "Guides to update" column.

Fortunately, computers that run Ubuntu 16.04 also support later releases, such as Ubuntu 18.04 and 20.04 (in most cases).
Have you kept installing releases newer than 16.04 on your laptop? Then updating the corresponding page will be really easy ;)

 

How do you update a page?

When you open the editor to modify a page, you will notice at the top the Informazioni macro, which looks like this:
<<Informazioni(forum="..."; rilasci="15.10 16.04";)>>

Inside it there is the rilasci entry, which lists in quotes the releases the guide has been tested with. Well, all you need to do is add the version number of one of the currently supported Ubuntu releases.

So, assuming you have successfully tested the guide with... say, 20.04, just add that number inside the macro, which becomes:
<<Informazioni(forum="..."; rilasci="15.10 16.04 20.04";)>>

Nothing to it, right? ;)

 

In general, it's a good thing...

The same applies to the pages already in the "guides to update" column and, more generally, to any guide on the wiki. As you can see, if you come across a page tested with an obsolete Ubuntu release, confirming its validity with a supported one is a matter of moments.

 

By the Doc Group
Want to contribute to the Wiki? Start right away!

il 27 January 2021 10.49

03 January 2021

What you should know if you are reading this

Our goal is to have a look at the mobile app development technology for startups in 2021, in order to publish an app for Android and an app for iOS with the perfect trade-off between quality and time+resources. How do we choose the best technology to accomplish this task?

First of all, the most accurate answer is:


it depends.

Every startup has different needs, in terms of business, technology, resources (how many developers? and how many of them are senior?) and deadlines.

Assumptions

There are 200000 articles out there about this subject.

This is why I want to list some boundaries that will “limit” our research. So this is not a generic Native vs Cross-platform article, but a study for a specific case.

That said, let’s start with these assumptions:

  • The start-up's technology stack has a very strong connection with Google, mostly through Firebase. This is a very important aspect. In start-up X the most important technology could be Y, so adjust the content of the article accordingly;
  • The start-up was providing all its services through web pages only, and the current app is basically a WebView showing an ad-hoc version of the website. I know, I know! Anyway, there is an ongoing process to transform all the services into a full set of (modularised) REST APIs that we can then consume in our app;
  • The app does not have heavy processes in terms of device resources, does not do any image/video processing and so on. Anyway, it has to have reliable access to the camera and retrieve the user's accurate position through GPS;
  • The current technology is React Native, so we have already seen both the pros and cons of a non-native approach. But non-native covers a whole world of solutions, which we will try to consider in this article;
  • The strong connection between React and Firebase, to build business web applications in JavaScript with all the power of the Firebase services, is very well explained in a book that I strongly suggest;
  • Last but not least, I am not vertical on mobile technologies, so your feedback is very welcome.

Technologies in this study

I really like the category names used in a very interesting article by Ionic: Comparing Cross-Platform Frameworks.

I will use the same labels, which are:

Hybrid-Native

Shared codebase, plus native code. Basically:

it allows you to program your user interfaces (UI) in one language that then orchestrates native UI controls at runtime.

Ionic article Comparing Cross-Platform Frameworks
Hybrid-Web

One codebase, running everywhere. Your code does not get “translated” into native components, but

the UI components you use in your app are actually running across all platforms

Ionic article Comparing Cross-Platform Frameworks

So basically you can code with “web” technologies (HTML/CSS/JS) and your UI components will be shown on the device.

Technologies and platforms in the article

According to the previous definitions, we will analyze the following platforms/technologies:

  • 100% native code: one app for iOS, one app for Android, with their respective languages, IDEs and so on;
  • ReactNative (Hybrid-native);
  • Flutter (Hybrid-native);
  • NativeScript (Hybrid-native);
  • Xamarin (Hybrid-native);
  • Ionic (Hybrid-web), which includes PhoneGap / Cordova.
Image from aalpha

What the study is about, and how we try to understand whether one choice is better than another

We will briefly describe the pros and cons of every technology.
Then we will compare them at a high level, and a bit in terms of performance.

At this point we will take into account other important factors:

  • Who’s behind them?
  • The community around each technology;
  • Available libs: active development, time to “port” new features, stability, support and community;
  • Trends: we don’t want to adopt a technology that is going to become marginal in the next few years;
  • Very few observations related to the UI/UX aspect.

One by one

100% Native

Basically, we have two separate projects here: one for Android, one for iOS.

Thanks to the respective IDEs (Android Studio and Xcode) you will probably be able to build a good skeleton of your app, and a good enough flow between the views, with little or no code required, using a “drag & drop” approach for elements (buttons, text fields, …). A very good preview is also available without using a simulator, for example.

XCode interface builder from apple.com

That’s great. Anyway, all the logic, such as performing API calls, dynamically adding/removing content, handling push notifications, accessing the GPS position, accessing the camera/gallery and so on, is written in a platform-specific language: Swift (Objective-C) for iOS, Kotlin (Java) for Android.

All of this just to say that the two apps will live in two separate, parallel projects. You can of course share some high-level logic about how to manage certain flows, but in the end you have to code them separately.

Two separate projects, two separate teams with separate skills. In the worst-case scenario this could double the resources needed.

On the other hand, you get maximum reliability when you handle device resources such as the camera or the GPS.

You have all the debugging and profiling power you need in your IDE.

Even regarding third-party libraries (of course, here a lot depends on the library itself), you will have fewer surprises if you are using the specific library for the given platform.

– Firebase

In the case of Firebase, a native Android solution is incredibly good, of course, given that we are talking about Google products.

Firebase also has an official SDK for iOS, so the native solution is incredibly good for iOS, too.

Hybrid-web

We have just one project here: you will code an HTML/JS/CSS solution, that’s it. All the UI is shared between the platforms, so you have a very consistent UI and you don’t need different components for different platforms. You can have them if you want, or you can simply apply a different CSS for each platform.

For debugging purposes you have different layers to take care of, but most of the job related to the UI can be done in a web-based debugging tool like the Chrome Developer tools. One place, different platforms.

So, we were saying that we have just one project here, and this is something to carefully think about if you are a start-up. Additionally, you can “re-use” web related skills in your team to build your app, substantially decreasing the resources needed to complete it.

How about the device resources?

For the most common tasks such as taking a picture or accessing the GPS location, you can be sure to find a plugin (official or not) that takes care of these tasks for you.

The main idea is that there is a sort of Javascript API system to communicate with the underlying platform.

Ionic

With Ionic, you can be sure to have someone on your team who can build a simple app with a small learning curve, since it supports the major JavaScript frameworks and libraries such as React, Angular and Vue.

It has 120 native device plugins for Camera, GPS, Bluetooth and so on.

It relies on Capacitor or Cordova to execute your web components within wrappers targeted to each platform, with API bindings to access the device’s capabilities.

This is the architecture compared to the native one:

Native vs Ionic apps. From the official Ionic doc.

– Firebase

The Ionic official documentation points to a specific plugin page on GitHub.

This plugin page has a “build: failing” badge on it (3rd of January 2021): no good.

Let’s see the popularity of the plugin:

Watch: 68
Stars: 993
(source: plugin page on GitHub, 3rd of January 2021)

Anyway, if your choice is Ionic and you want to continue your journey with React, I strongly suggest this book to give you very useful hints on how to build robust web applications with React and Firebase.

For this part, that’s it! In the next part we will focus on the Hybrid-Native solutions.

The post Choosing the technology behind a mobile app for a startup in 2021: Native vs cross-platform (part 1) appeared first on L.S..

il 03 January 2021 08.24

11 December 2020

2020-12-11 18:39:36.965 14746-14746/your.bundle E/RNFirebaseMsgReceiver: Background messages only work if the message priority is set to 'high'

This is something you can find in your adb logcat output when sending a push notification (cloud message) and your app is in background.

More specifically, the problem is mostly related to the idle status of your device.

This is explained in a dedicated Firebase doc, particularly in the “Using FCM to interact with your app while the device is idle” section.

If you are sending a message with curl, Postman and so on, you can add something to your JSON in order to set the priority.

So a JSON body like this:

{
  "to": "token_here",
  "notification": {
    "title": "hey from Lorenzo",
    "body": "great article dude!"
  },
  "data": {
    "body": "normal priority notification",
    "title": "here we are"
  }
}

becomes:

{
  "to": "token_here",
  "android": {
    "priority": "high"
  },
  "priority": 10,
  "notification": {
    "title": "hey from Lorenzo",
    "body": "great article dude!"
  },
  "data": {
    "body": "normal priority notification",
    "title": "here we are"
  }
}

When performing your POST request to https://fcm.googleapis.com/fcm/send, please remember to add a Content-Type: application/json header and an Authorization header with value key=<server_key>, which you can retrieve from the “Cloud Messaging” section of your project settings in the Firebase console.

The post Firebase, cloud messaging and problems receiving background notifications appeared first on L.S..

il 11 December 2020 17.58

18 November 2020

Automatic and continuous testing is a fundamental part of today’s development cycle. Given a Gitlab pipeline that runs for each commit, we should enforce not only that all tests pass, but also that a sufficient number of them are present.

cover

Photo by Pankaj Patel on Unsplash

Aren’t you convinced yet? Read 4 Benefits of CI/CD! If you don’t have a proper Gitlab pipeline to lint your code, run your tests, and manage all those other annoying small tasks, you should definitely create one! I’ve written an introductory guide to Gitlab CI, and many more are available on the documentation website.

While there isn’t (unfortunately!) a magic wand to highlight whether the code is covered by enough tests, and, in particular, whether these tests are of sufficiently good quality, we can nonetheless find some valuable KPIs to act on. Today, we will look at code coverage: what it indicates, what it does not, and how it can be helpful.

Code coverage

In computer science, test coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage.

Wikipedia

Basically, code coverage indicates how much of your code has been executed while your tests were running. Personally, I don’t find a high code coverage a significant measure: if tests are fallacious, or they run only on the happy path, the code coverage percentage will be high, but the tests will not actually guarantee a high quality of the code.

On the other hand, a low code coverage is definitely worrisome, because it means some parts of the code aren’t tested at all. Thus, code coverage has to be taken, as every other KPI based exclusively on lines of code, with a grain of salt.

Code coverage and Gitlab

Gitlab allows collecting code coverage from test suites directly from pipelines. Major information on the setup can be found in the pipelines guide and in the Gitlab CI reference guide. Since there are lots of different test suites out there, I cannot include how to configure them here. However, if you need any help, feel free to reach out to me at the contacts reported below.

With Gitlab 13.5 there is also a Test Coverage Visualization tool, check it out! Gitlab will also report code coverage statistics for pipelines over time in nice graphs under Project Analytics > Repository. Data can also be exported as csv! We will use such data to check if, in the commit, the code coverage decreased compared to the main branch.

This means that all new code has to be tested at least as much as the rest of the code. Of course, this strategy can easily be changed: the check is only one line of bash, and can easily be replaced with a fixed number, or any other logic.
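For instance, a minimal sketch of the fixed-number variant could look like this (the 80% minimum is an arbitrary example; awk does the comparison because plain shell -lt only handles integers, while GitLab reports coverage as a decimal such as 75.5):

```shell
MINIMUM_COVERAGE=80.0
CURRENT_COVERAGE=75.5   # in the real job this value comes from the GitLab API

# awk exits 0 when the condition holds, so the if-branch runs on a drop
if awk "BEGIN {exit !($CURRENT_COVERAGE < $MINIMUM_COVERAGE)}"; then
  echo "Coverage ${CURRENT_COVERAGE}% is below the minimum ${MINIMUM_COVERAGE}%"
  RESULT=fail   # in the CI job you would 'exit 1' here to fail the pipeline
else
  RESULT=ok
fi
```

Swapping this into the job only means replacing the last script line.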

The Gitlab Pipeline Job

The job that checks the coverage runs in a stage after the testing stage. It uses alpine as base, and curl and jq to query the APIs and read the code coverage.

On self-hosted instances, or on Gitlab.com Bronze or above, you should use a project access token to give access to the APIs. On Gitlab.com Free, use a personal access token. If the project is public, the APIs are accessible without any token. The job needs three variables: the name of the job which generates the code coverage percentage (JOB_NAME), the target branch to compare the coverage with (TARGET_BRANCH), and a private token to read the APIs (PRIVATE_TOKEN). The job will not run when the pipeline is running on the target branch, since it would be comparing the code coverage with itself, wasting runner minutes for nothing.

The last line is the one providing the logic to compare the coverages.

checkCoverage:
    image: alpine:latest
    stage: postTest
    variables:
        JOB_NAME: testCoverage
        TARGET_BRANCH: main
    before_script:
        - apk add --update --no-cache curl jq
    rules:
      - if: '$CI_COMMIT_BRANCH != $TARGET_BRANCH' 
    script:
        - TARGET_PIPELINE_ID=`curl -s "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines?ref=${TARGET_BRANCH}&status=success&private_token=${PRIVATE_TOKEN}" | jq ".[0].id"`
        - TARGET_COVERAGE=`curl -s "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines/${TARGET_PIPELINE_ID}/jobs?private_token=${PRIVATE_TOKEN}" | jq --arg JOB_NAME "$JOB_NAME" '.[] | select(.name==$JOB_NAME) | .coverage'`
        - CURRENT_COVERAGE=`curl -s "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines/${CI_PIPELINE_ID}/jobs?private_token=${PRIVATE_TOKEN}" | jq --arg JOB_NAME "$JOB_NAME" '.[] | select(.name==$JOB_NAME) | .coverage'`
        - if awk "BEGIN {exit !($CURRENT_COVERAGE < $TARGET_COVERAGE)}"; then echo "Coverage decreased from ${TARGET_COVERAGE} to ${CURRENT_COVERAGE}" && exit 1; fi

This simple job works both on Gitlab.com and on private Gitlab instances, for it doesn’t hard-code any URL.
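To make the jq filter a bit less magic, here is a tiny standalone illustration, run against a hypothetical, hand-trimmed fragment of the /jobs API response (the real response contains many more fields):

```shell
# Hypothetical two-job response: only the entry whose name matches
# JOB_NAME contributes its coverage value.
RESPONSE='[{"name":"lint","coverage":null},{"name":"testCoverage","coverage":83.2}]'
JOB_NAME=testCoverage

COVERAGE=$(echo "$RESPONSE" | jq --arg JOB_NAME "$JOB_NAME" '.[] | select(.name==$JOB_NAME) | .coverage')
echo "$COVERAGE"   # prints 83.2
```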

Gitlab will now block merging merge requests without enough tests! Again, code coverage is not the magic bullet, and you shouldn’t strive to have 100% of code coverage: better fewer tests, but with high quality, than more just for increasing the code coverage. In the end, a human is always the best reviewer. However, a small memo to write just one more test is, in my opinion, quite useful ;-)

Questions, comments, feedback, critics, suggestions on how to improve my English? Leave a comment below, or drop me an email at riccardo@rpadovani.com.

Ciao,
R.

il 18 November 2020 09.00

24 October 2020


Banner Ubuntu 20.10

 

Ubuntu 20.10, codenamed Groovy Gorilla, was released on 22 October and will be supported until July 2021.

As usual, the Documentation Group organized the review of a number of wiki pages.
Much work still remains to be done: many guides need to be updated and verified against the new 20.10 release. To keep the documentation review going, we need your help!

 

How to contribute

Contributing to writing and updating the Ubuntu-it wiki documentation is fairly simple.
Many guides show, below the table of contents, a note similar to this: «Guide verified with Ubuntu: 16.04 18.04 20.04».

  • If a guide is also valid for Ubuntu 20.10 but that information is not shown below the page's table of contents, just add the verification for the 20.10 release.

  • If a guide contains instructions that do not work with Ubuntu 20.10, you can update it to adapt its contents to the new release.

As always, if you find wrong names, spelling mistakes, broken links or other easy-to-fix inaccuracies, don't wait: fix them without fear!


By the Doc Group
Start contributing!

il 24 October 2020 21.33

12 February 2020

Even if it is currently a beta feature, you can create multiple versions of your agent and publish them to separate environments.

The problem is that the REST URI bindings do not appear to have been updated in the API definition for this feature.

I wrote an issue in the cloud-php GitHub project, and John Pedrie, who works on the project, asked me to try the gRPC way.

I had never done it before, so here’s a guide to help you in case you are in my situation… hopefully it will be integrated soon!

Assuming you are working with PHP7.3 (and a Debian-based Linux server):

Based on the Install gRPC for PHP doc, on our server (running Ubuntu server 19.10, PHP7.3) I had to install some packages:

sudo apt-get install autoconf libz-dev php-dev php-pear
sudo pecl install grpc
sudo pecl install protobuf

Let’s modify the /etc/php/7.3/fpm/php.ini file by adding the following lines:

extension=grpc.so
extension=protobuf.so

Restarting php7.3-fpm:

sudo systemctl restart php7.3-fpm.service

Then add the "grpc/grpc": "^v1.1.0" line to the project’s composer.json.

Now it’s time to change the code. The usual way is something like:

$sessionsClient = new SessionsClient();
$session = $sessionsClient->sessionName($projectId, $sessionId);
$response = $sessionsClient->detectIntent($session, $queryInput);

It has to be changed to something like:

$session = $this->_getSessionName($sessionId);
$sessionsClient = new SessionsClient();
$response = $sessionsClient->detectIntent($session, $queryInput);

// WHERE _getSessionName() is:
private function _getSessionName($sessionId)
{
    $projectId = $this->_getProjectId();
    $environment = $this->_getEnvironment();
    return "projects/{$projectId}/agent/environments/{$environment}/users/-/sessions/{$sessionId}";
}

And that’s it.

In the Dialogflow History tab there is still no way to filter conversations by Environment, but there is an Environment indicator in the conversation Detail.

Dialogflow environments

The post Dialogflow API, support for versions and environments appeared first on L.S..

il 12 February 2020 15.28

31 January 2020

2 free months of Skillshare Premium!

Lorenzo Sfarra (twilight)

You can get 2 free months of Skillshare Premium, thanks to this referral link.

With Skillshare Premium you will:

  • Get customized class recommendations based on your interests.
  • Build a portfolio of projects that showcases your skills.
  • Watch bite-sized classes on your own schedule, anytime, anywhere on desktop or mobile app.
  • Ask questions, exchange feedback, and learn alongside other students.

That’s it! 🙂

Skillshare logo
  • Get inspired.
  • Learn new skills.
  • Make discoveries.
  • Be curious.

I will be more than happy if you join my classes and give me some feedback. The classes are about almost everything that I do in my life (you can check my skills again on the homepage)! 🙂

The post 2 free months of Skillshare Premium! appeared first on L.S..

il 31 January 2020 09.05

16 December 2019

You wake up one day and your Android phone contacts have disappeared.

You try to perform a Google search but you only find strange solutions, mostly related to some Samsung stuff, factory resets, uninstalling some app, etc…

Suddenly you realise that your Google Contacts list is empty too!

OK, enough context: here is the solution.

Open your browser to https://contacts.google.com/.

In the top right corner there is a gear icon: click on it, then click on Undo changes:

Now you can choose the version of your contacts that you want to restore: minutes ago, hours ago, days ago, months ago, years ago…

And that’s it: just the time to sync your phone, and they’re back!

The post Restore your Google Contacts (for your Android phone, too) appeared first on L.S..

il 16 December 2019 09.31

09 December 2019

Merry Christmas from Banksy

Dario Cavedon (iced)

il 09 December 2019 19.56

14 November 2019


From the TV, Bobo Vieri smiles slyly, just like when he played football, and repeats "Shave like a bomber!". All the men who remember him from his playing days, when he got people talking both on and off the pitch, answer that smile with approval.

What Vieri does (well) is the testimonial's job: he lends his handsome face and his innate charm to selling a product. It is a profession also embraced by many of his former footballer colleagues, and by people from show business.

So it makes me smile when I read posts like this that, speaking of Chiara Ferragni, write about a "pneumatic void". It seems self-evident to me that Chiara Ferragni is a testimonial for herself, someone who knows how to use the new channels to communicate her message (whatever it may be). She does marketing, and she does it well, judging by the money she turns over. In her case, moreover, the phenomenon is so widespread that whoever writes or talks about her, even negatively, gets publicity in turn, exploiting her fame by reflection.

In these articles I see, beyond the malice used to attract clicks, a certain inability to read reality and to adapt to the ongoing change in customs and professions.

I also find it surprising that the commentators above fail to grasp how the testimonial phenomenon has partly come down from the Olympus of TV celebrities to branch out into the thousand streams of the middle class. Nowadays testimonials can also be complete unknowns, people we may have gone jogging with in the park last Sunday.
il 14 November 2019 13.00

09 November 2019


Last October I attended Ubucon Europe 2019, in Sintra, Portugal. I still need some time to gather the right words to explain how good it felt to be there with my Ubuntu mates, and how many great talks I saw.
Meanwhile, you can take a look at the video of my talk, where I tell my running story: how I began running, why I still run, and why (almost) everybody can do it. Oh! Obviously I also explain what open source software you can use to track your runs safely and securely.
Let me know what you think about it! :-)


Many thanks to Ubuntu & Canonical for funding my travel expenses!

(The picture in this post is from Marco Trevisan on Twitter)
il 09 November 2019 10.05

19 April 2019


During her visit to Italy, there was plenty of negative criticism of Greta Thunberg, the Swedish girl who left school for her personal battle in favour of the environment, and of her action against climate change. There were also attacks on her as a person, as unfounded as they were despicable, especially considering her young age.

What escapes those who are used to reading only newspaper headlines, and forming an opinion from them, is that oil companies spend millions of dollars to spread fake news, fund politicians and polish their image.

That is: while I am busy separating the little plastic window from the rest of the paper envelope before throwing each into its own recycling bin, "the big oil companies, although officially supporters of the fight against climate change, actually operate in the shadows in order to preserve their business". Not to mention that "the big international banks have poured 1,900 billion dollars into the fossil fuel sector (gas and coal included)".

If after all this you remain indifferent, if you couldn't care less that planet Earth is becoming ever less livable, if you think that our children and grandchildren who will have to manage it will somehow get by, then you should think about what you hold most dear (money!) and consider the possibility that continuing to finance fossil energy sources (100 billion dollars a year, 17 billion of which from the Italian Government) is a terrible idea for your own wallet.

Greta's photo is taken from Wikipedia.
il 19 April 2019 14.31

12 April 2019

Blocked by Apple. Thanks!

Dario Cavedon (iced)


Bored beyond measure by the invasive Twitter advertising of the well-known maker of electronic contraptions, which intends to reclaim its virginity on privacy through emotionally cutesy videos, I replied sharply to one of its tweets. I was immediately blocked.

Moral of the story: it will be impossible for me to see Apple's next hypocritical ads. Thank you, Apple!
il 12 April 2019 13.27

01 March 2019


The Ubuntu-it wiki has been updated to make its pages easier to read on smartphones :)
The changes have been live for a couple of weeks, but some adjustments were needed to make them really effective. You may need to refresh or clear your browser cache to see the new layout; apart from that, you can now read the documentation, the newsletter and all the other wiki pages from your smartphone without further problems.

 

At the moment the change only kicks in when the window is smartphone-sized, in the so-called "portrait" mode. In that case the new simplified header appears, with the classic button for the collapsible menu. You will notice that in this mode the login/edit/etc. links are not shown.
Rotating the smartphone ("landscape" mode) brings back the desktop view of the site.

This setup was chosen because the wiki was originally designed with fixed-width layouts and sometimes complex tables that mostly suit desktop viewing. You may therefore run into pages whose content adapts poorly to narrow screens; by rotating the device you can always fall back to the original layout.

Apart from a few pages that will have to be retouched by hand, and some unavoidable small tweaks to the wiki settings, the results went beyond expectations. So it is not out of the question that, further down the road, the responsive design may be extended to desktop viewing as well.

 

Technical details

The wiki runs on the MoinMoin platform. Its look and feel dates back to the introduction of the Light theme, back in 2011.
For those interested, the changes involved just three files:

  • light.py: inside it, alongside the old header, the HTML and JavaScript code that draws the new mobile-style header was added.

  • common.css - screen.css: these are the two main files responsible for the wiki's styling. Media queries were added to them to adapt the page elements and to make either the new or the old header visible depending on the screen width.

All in all, MoinMoin pleasantly surprised us by showing a good degree of adaptability :)
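As an illustration, the header switching described above boils down to a couple of media query rules along these lines (a minimal sketch: the .mobile-header and .desktop-header class names and the 480px breakpoint are hypothetical, not the wiki's actual selectors):

```css
/* Default (desktop/landscape): show only the classic header */
.mobile-header { display: none; }

/* Narrow portrait screens: swap the headers */
@media screen and (max-width: 480px) {
  .mobile-header  { display: block; } /* new simplified header with menu button */
  .desktop-header { display: none; }  /* original fixed-width header */
}
```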

Happy reading, everyone!

 

By the Doc Group
Want to contribute to the Wiki? Get started right away!

il 01 March 2019 10.21

19 February 2019

Introduction: Node + express + body-parser + Shopify?

Are you using Node/express/body-parser together with Shopify, and would you like to use its webhooks?

If the answer is YES to the questions above, I am going to suggest here a way to solve a common problem.

NOTE: This and much more is covered in the Coding Shopify webhooks and API usage: a step-by-step guide class on Skillshare!

The problem

Body-parser does an amazing job for us: it parses incoming request bodies, and we can then easily work with the req.body property populated by this operation.

But, there are some cases in which we need to access the raw body in order to perform some specific operations.

Verifying Shopify webhooks is one of these cases.

Shopify: verifying webhooks

Shopify states that:

Webhooks created through the API by a Shopify App are verified by calculating a digital signature. Each webhook request includes a base64-encoded X-Shopify-Hmac-SHA256 header, which is generated using the app’s shared secret along with the data sent in the request.

We will not go into all the details of the process. For example, Shopify uses two different secrets to generate the digital signature: one for the webhooks created in the Admin –> Notifications interface, and one for those created through the API. Only the key differs, though; the process is the same. So I will assume that we have created all our webhooks in the same way, using JSON as the payload format.

A function to verify a webhook

We will now code a simple function that we will use to verify the webhook.

The function’s first parameter is the HMAC taken from the request headers; the second one is the raw body of the request.

const crypto = require('crypto'); // built-in Node module

function verify_webhook(hmac, rawBody) {
    // Retrieve the shared secret
    const key = process.env.SHOPIFY_WEBHOOK_VERIFICATION_KEY;
    /* Compute the HMAC digest based on the shared secret
     * and the request contents, then compare it with the
     * HMAC sent by Shopify
     */
    const hash = crypto
          .createHmac('sha256', key)
          .update(rawBody, 'utf8')
          .digest('base64');
    return hmac === hash;
}

We take the HMAC retrieved from the request headers (X-Shopify-Hmac-Sha256), retrieve the key stored in the .env file and loaded with the dotenv module, compute the HMAC digest according to the algorithm specified in the Shopify documentation, and compare the two values.

We use the crypto module in order to use some specific functions that we need to compute our HMAC digest.

Crypto is a module that:

provides cryptographic functionality that includes a set of wrappers for OpenSSL’s hash, HMAC, cipher, decipher, sign, and verify functions.

NOTE: it’s now a built-in Node module.

Retrieving the raw body

You probably have a line like this one, with or without customization, in your code:

// Express app
const app = express();
app.use(bodyParser.json());

 

So your req.body property contains a parsed object representing the request body.

We need to find a way to "look inside the middleware" and somehow attach to the request the information produced by the function we just defined: is it verified or not?

Now, we know that the json() function of body-parser accepts an optional options object, and we are very interested in one of the possible options: verify.

The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request. The parsing can be aborted by throwing an error.

That’s it: this is exactly what we are going to use.

Let’s change the json() call in this way:

// Express app
const app = express();
app.use(bodyParser.json({verify: verify_webhook_request}));

And we create the function verify_webhook_request() according to the signature documented before:

function verify_webhook_request(req, res, buf, encoding) {
  if (buf && buf.length) {
    const rawBody = buf.toString(encoding || 'utf8');
    const hmac = req.get('X-Shopify-Hmac-Sha256');
    req.custom_shopify_verified = verify_webhook(hmac, rawBody);
  } else {
    req.custom_shopify_verified = false;
  }
}

Basically, we check whether the buffer is empty: in that case, we consider the message not verified.

Otherwise, we retrieve the raw body using the toString() method of the buf object with the passed encoding (defaulting to UTF-8).

We retrieve the X-Shopify-Hmac-Sha256 header.

We then call verify_webhook() and store its result in a custom property of the req object.

Now, in our webhook code we can check req.custom_shopify_verified in order to be sure that the request is verified. If this is the case, we can go on with our code using the req.body object as usual!

Another idea could be to stop the parsing process altogether: we can do this by throwing an error inside the verify_webhook_request() function.

 

Conclusion

Please leave your comment if you want to share other ways to accomplish the same task, or generically your opinion.

 

NOTE: Remember that this and much more is covered in the Coding Shopify webhooks and API usage: a step-by-step guide class on Skillshare!

The post Node, body-parser and Shopify webhooks verification appeared first on L.S..

il 19 February 2019 16.49

07 February 2019


Ubuntu 14.04 LTS End of Life

 

Ubuntu 14.04 LTS Trusty Tahr will reach the end of its life cycle on 30 April 2019; from then on, Ubuntu 14.04 LTS - ESM (Extended Security Maintenance) will be available. It is a feature included with Ubuntu Advantage, Canonical's commercial support package, and it can also be purchased on a stand-alone basis. Extended Security Maintenance was created to help simplify the migration process towards new, up-to-date platforms while maintaining compliance and security standards.

 

The introduction of Extended Security Maintenance for Ubuntu 12.04 LTS was an important step for Ubuntu, bringing critical and important security patches beyond Ubuntu 12.04's end-of-life date. ESM is used by organizations to address security and compliance issues while they manage the upgrade process to a more recent, fully supported version of Ubuntu. The availability of ESM for Ubuntu 14.04 means that the End of Life of Ubuntu 14.04 LTS Trusty Tahr in April 2019 should not negatively affect the security and compliance of the organizations still running it. In total, ESM delivered more than 120 updates, including fixes for more than 60 high- and critical-priority vulnerabilities, to Ubuntu 12.04 users.

 

Once again, as we have pointed out in the past in this newsletter (Ubuntu at the forefront of security), it is extremely clear that Canonical puts security at the centre of Ubuntu, as well as of the practices and architecture of the related products, on both the business and the consumer side.

 

Source: blog.ubuntu.com

il 07 February 2019 09.57

05 February 2019


In April, support for the glorious Ubuntu 14.04 will come to an end.
This means that in the section containing the reports of Ubuntu installations on laptops, many of the links to the guides will move to the infamous "Guides to update" column.

Fortunately, the computers that run Ubuntu 14.04 also support the later releases, such as Ubuntu 16.04 and 18.04 (in most cases).
Have you kept installing releases later than 14.04 on your laptop? Then updating the corresponding page will be really easy ;)

 

How do you update the page?

When you open the editor to modify a page, you will notice at the top the Informazioni macro, which looks like this:
<<Informazioni(forum="..."; rilasci="13.10 14.04";)>>

Inside it there is the rilasci entry, which lists between quotes the releases the guide has been tested with. Well, all you need to do is add the version number of one of the currently supported Ubuntu releases.

So, assuming you have successfully tested the guide with... say, 18.04, just add that number inside the macro, which becomes:
<<Informazioni(forum="..."; rilasci="13.10 14.04 18.04";)>>

Nothing to it, right? ;)

 

In general it is a good thing...

For obvious reasons we have put the laptop pages tested with 14.04 in the foreground. The same applies, however, to the pages already sitting in the "guides to update" column and, even more generally, to any kind of guide on the wiki.
As you can see, if you come across a page tested with an obsolete version of Ubuntu, confirming its validity with a supported version is a matter of moments.

 

By the Doc Group
Want to contribute to the Wiki? Get started right away!

il 05 February 2019 17.54

25 January 2019


 

The start of 2019 brings important news concerning the wiki documentation.

The Doc Group has decided to leave more initiative to wiki users when it comes to creating new guides and updating existing pages.
What does this mean? Let's take a step back...

 

The Doc Group has always taken care of assisting and supervising the work done by users: being present on the forum to assess whether a proposed guide was needed and relevant to the purposes of the documentation... or, once a user had done the work, promptly reviewing and adapting the contents to the Standards, adding the link to the guide in the relevant portal, and so on.

All this to guarantee a rational and well-ordered growth of the documentation.
The weak point of this working method is that a continuous presence of the staff cannot always be guaranteed, although over the last seven years we have largely managed it :)

 

Our support and supervision activity will continue, but it has been decided to update the process so that, even in our absence, a user can act on their own initiative. This is precisely to prevent a guide from remaining blocked for a long time before being published.
In essence, we have tried to keep the good practices applied over these years and to add a pinch of extra responsibility on the users' side.

These are the pages describing the main changes to the working method:

  • GruppoDocumentazione/Partecipa: shows the steps to follow, with some news such as the possibility of updating the indexes of the thematic portals and updating the table of completed guides.

  • GuidaWiki/Standard: the section on the wiki-text format has been schematized, and standard phrases plus some stylistic guidelines have been introduced, which we ask users to follow as closely as possible.

  • GruppoDocumentazione/Partecipa/FAQ: logical adaptation to the changes described above.

 

Greetings, and may 2019 bring good things to the wiki documentation :)

 

By the Doc Group
Want to contribute to the Wiki? Get started right away!

il 25 January 2019 16.25

19 January 2019

Here is the latest news from the documentation of the Italian Ubuntu community.
 
Portale Ambiente Grafico
  • PCmanFm Menu Stampa: new guide on enabling printing from the context menu of the PCmanFM file manager.
 
Portale Amministrazione Sistema
  • Aggiornare Kernel: updated kernel installation procedure and new reference pages added.
  • Apper: new guide for this optional package manager for Kubuntu.
 
Portale Hardware
  • AsusX53S-K53SC: report of an Ubuntu installation on this laptop.
  • Acer Aspire 5612wlmi: report of an Ubuntu installation on this laptop.
  • Acer Aspire ES1-524: report of an Ubuntu installation on this laptop.
  • Lenovo Yoga 730_13IWL: report of an Ubuntu installation on this laptop.
  • Epson Multi: updated guide on installing Epson single-function or multi-function printers on Ubuntu.
  • Scanner Epson: updated guide on installing Epson scanners on Ubuntu.
 
Portale Installazione
  • Installare Ubuntu: revision of the entire guide, with new content added.
 
Portale Internet e Rete 
  • Accelerazione Hardware: new guide showing how to enable hardware acceleration on Chromium and the browsers derived from it, such as Google Chrome, Opera and Vivaldi.
  • Opera: guide updated for the new versions of the browser.
  • Thunderbird: guide updated for the famous free email client, which uses a new rendering engine in its latest versions.
 
Portale Programmazione
  • CMake Gui: updated guide on this tool, useful for controlling the software build process.
  • Pip: new guide to installing and getting started with pip, the package manager for Python, on Ubuntu.
 
Portale Ufficio
  • Stardict: complete revision of the guide.
 
For more information, see the Lavoro Svolto/2018 page.
 
 
il 19 January 2019 21.30

08 December 2018

Scanner Epson V10: update

Salvatore Palma (totò)

This guide has been tested on:
Bionic Beaver

A while ago I wrote this article, describing the steps to install the scanner in question.

I recently upgraded from Xenial Xerus to Bionic Beaver and, even following the instructions in the article above, the scanner was still not detected. So, searching the Italian wiki (here is the guide), I found out that some extra steps must be added. To recap:

  • download the drivers from this address;
  • install the three packages, iscan-data_[current version]_all.deb, iscan_[current version]~usb0.1.ltdl7_i386.deb and iscan-plugin-gt-s600_[current version]_i386.deb, or run the install.sh script inside the folder (it must be made executable first);
  • install the libltdl3, libsane-extras and sane packages;
  • edit the /etc/sane.d/dll.conf file and comment out the line: #epson;
  • edit the /etc/udev/rules.d/45-libsane.rules file and add the following lines:

# Epson Perfection V10
SYSFS{idVendor}=="04b8", SYSFS{idProduct}=="012d", MODE="664", GROUP="scanner"

  • edit the /etc/udev/rules.d/79-udev-epson.rules file and add the following lines:

# chmod device EPSON group
ATTRS{manufacturer}=="EPSON", DRIVERS=="usb", SUBSYSTEMS=="usb", ATTRS{idVendor}=="04b8", ATTRS{idProduct}=="*", MODE="0777"

  • run the following command, according to your PC's architecture:

For 32-bit architecture: sudo ln -sfr /usr/lib/sane/libsane-epkowa* /usr/lib/i386-linux-gnu/sane
For 64-bit architecture: sudo ln -sfr /usr/lib/sane/libsane-epkowa* /usr/lib/x86_64-linux-gnu/sane

  • finally, run the following command to restart udev:

sudo udevadm trigger

il 08 December 2018 12.30

28 November 2018

Introduction: Firebase CLI

According to the github page of the firebase-tools project, the firebase cli can be used to:

  • Deploy code and assets to your Firebase projects
  • Run a local web server for your Firebase Hosting site
  • Interact with data in your Firebase database
  • Import/Export users into/from Firebase Auth

The web console is fine for a lot of stuff.

But let’s see what we can do with the CLI, and let’s check what we CANNOT do without it.

Installation

Ok, easy.

Assuming you have npm installed, run the following command:

npm install -g firebase-tools

You will now have the firebase command available globally.

Now, run:

firebase login

to authenticate to your Firebase account (this will open your browser).

Firebase tasks

Selecting projects

List your firebase projects:

firebase list

You can then select one of them with

firebase use <project>

Cloud Functions Log

A common thing you will want to do is to look at the logs.

This is possible with:

firebase functions:log

As far as I can tell, there is currently no way to “listen” for changes in the log, which would be a very useful feature.

The

gcloud app logs tail

sadly does not help here, even if the currently selected project is the Firebase one.

If you have some tips about it, I will be more than happy to edit this article.

Configuration

The functions runtime configuration is handled through a set of commands:

firebase functions:config:get|set|unset|clone

to retrieve|store|remove|clone the project configuration (respectively).

Emulate locally

Running

firebase functions:shell

will let you choose which of your functions to emulate locally.

You can then choose the method (get/head/post/put/patch/del/delete/cookie/jar/default) and the test data to start the emulation.

Delete functions

You can then delete one or more cloud functions with:

firebase functions:delete <function_name>

 

Deploy and serve locally

To serve your project with cloud functions locally, you can run:

firebase serve

When you’re ready to deploy your firebase project, you can run:

firebase deploy

 

Accounts management

With the firebase cli you can both import and export Firebase accounts with the:

firebase auth:import|export <file>

Surprisingly enough, import will import accounts from a file, and export will export accounts to a file.

 

Database

I will skip this part. It’s very well documented everywhere in the firebase ecosystem.

 

Firestore

Here we are.

What can we do with Firestore from the CLI? Almost nothing, I am afraid.

Right now there are only two things we can accomplish.

But we can use gcloud and gsutil to perform other operations, like exporting/importing data.

Be sure to be logged in with

gcloud auth login

If you are already logged in with different accounts, you can list the accounts with

gcloud auth list

and select one (if the one you need is not the active one) with

gcloud config set account myaccount@gmail.com

At this point, let’s select our project.

gcloud projects list

to list all the projects and

gcloud config set project project_name

to select our project.

Exporting data

First of all, we need a Storage Bucket.

gsutil ls

to see the list of possible buckets to use.

Let’s say that we want to export our data to gs://myproject-abcd.appspot.com/ :

gcloud firestore export gs://myproject-abcd.appspot.com/

You can even export only some collections:

gcloud firestore export gs://myproject-abcd.appspot.com/ --collection-ids=[COLLECTION_ID_1],[COLLECTION_ID_2]

Importing data

You can imagine it, right?

gcloud firestore import gs://myproject-abcd.appspot.com/2019-12-23T23:54:39_76544/

The operation takes some time (proportional to the data size that has to be imported). You can safely close your terminal, the operation will continue.

Checking operations status

Import and export can take time (import takes time even with a very small database).

To list the operations:

gcloud firestore operations list

And you will see something like:

done: true
metadata:
  '@type': type.googleapis.com/google.firestore.admin.v1.ImportDocumentsMetadata
  endTime: '2019-12-23T15:48:03.747509Z'
  inputUriPrefix: gs://myproject-abcd.appspot.com/2019-12-23T23:54:39_76544
  operationState: SUCCESSFUL
  progressBytes:
    completedWork: '2601'
    estimatedWork: '2601'
  progressDocuments:
    completedWork: '8'
    estimatedWork: '8'
  startTime: '2019-12-23T15:47:25.089261Z'
name: projects/myproject-abcd/databases/(default)/operations/AiAydsadsadsadsadVhZmVkBxJsYXJ0bmVjc3Utc2Jvai1uaW1kYRQKLRI
response:
  '@type': type.googleapis.com/google.protobuf.Empty

Indexes (read)

You can look at your indexes:

firebase firestore:indexes

This will list the indexes in JSON form, as an object whose indexes array contains entries like the following:

    {
      "indexes": [
        {
          "collectionId": "<collection_id>",
          "fields": [
            {
              "fieldPath": "<field_name_one>",
              "mode": "ASCENDING|DESCENDING"
            },
            {
              "fieldPath": "<field_name_two>",
              "mode": "ASCENDING|DESCENDING"
            }
          ]
        },
        {..}
      ]
    }

We cannot perform other operations over indexes.

Collections and documents: delete recursively

It’s easy from the web console to delete a document, and it’s easy to do it programmatically.

It’s easy on the CLI, too.

But the nightmare of the web console (and of the programmatic approach, too) is that there is no simple and fast way to recursively delete a collection.

This is luckily possible with the CLI.

You can recursively delete a collection by using:

firebase firestore:delete --recursive <collection>

Bonus: delete all collections

Now, assuming you are testing something and your firestore is full of garbage, you might want to start from scratch deleting every collection.

This is possible running:

firebase firestore:delete --all-collections

If you are looking at your database in the firebase console, please remember to refresh if the UI is not updated (that means that you still see the root collections).

Conclusion

This concludes the article for now.

I hope that the Firestore side will be extended with more features and commands, because right now it is very limited.

The most useful feature I can think of, generally speaking, would be the chance to “tail” logs in the shell.

I would be more than happy if someone could chip in with useful tools and additional tips.

The post Exploring Firebase CLI with some Firestore tips appeared first on L.S..

il 28 November 2018 17.19

29 October 2018

Handling execution time for PHP (FPM)

Lorenzo Sfarra (twilight)

The problem: setting max_execution_time / using set_time_limit() does not give the wanted results

If you are here, you are probably looking for an answer to the question:

why is set_time_limit() not working properly, and why is the script killed earlier?

or

why is ini_set('max_execution_time', X) not working properly, and why is the script killed earlier?

There is not a single answer, because there are many variables to consider.

The concepts: PHP itself vs FPM

Anyway, I will list the basic concepts that will help you find the best path to your answer.

The first, brutal, note is: directives you specify in php-fpm.conf are mostly not changeable from ini_set().

Please also note that set_time_limit() is almost nothing more than a convenience wrapper around ini_set() in the background, so the same concept applies.

For simplicity, in this post I will talk about set_time_limit(), but the same applies to ini_set(whatever_related_to_time).

According to what we already said, it’s important to understand that set_time_limit() can only “overwrite” the max_execution_time directive, because

they only affect the execution time of the script itself

(there is a specific note in the official documentation).

On the other hand, directives like request_terminate_timeout are related to FPM, so we are talking about the process management level here:

should be used when the ‘max_execution_time’ ini option does not stop script execution for some reason.

Web server dependent directives

What we have seen so far are not the only variables in the game.

You should check your web server configuration, too, to be sure nothing is playing a role in your timeout problem.

A very common example is the fastcgi_read_timeout, defined as:

a timeout for reading a response from the FastCGI server. The timeout is set only between two successive read operations, not for the transmission of the whole response. If the FastCGI server does not transmit anything within this time, the connection is closed.

So…what to do?

The basic answer is: be perfectly clear about what is defined by the web server (e.g. fastcgi_read_timeout), what is defined by FPM (e.g. request_terminate_timeout) and what is defined by PHP itself (e.g. max_execution_time).

For example, if request_terminate_timeout is set to 1 second but max_execution_time is set to 2 seconds, the script will be terminated after 1 second: whichever limit comes first wins, even if you have set max_execution_time with ini_set() in your script.

A possible solution is to rely on max_execution_time instead of request_terminate_timeout for per-script limits (but keep the latter, too, as a solid maximum upper bound).
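To make the layering concrete, this is how the levels of the example above could be spelled out in configuration (a sketch: file locations and the 1s/2s values are illustrative only, not recommendations):

```
; php.ini -- PHP level: the limit that set_time_limit()/ini_set() can override
max_execution_time = 2

; FPM pool configuration (e.g. www.conf) -- process-management level:
; kills the worker after 1 second, NOT overridable from ini_set()
request_terminate_timeout = 1s
```

With these values the FPM limit wins: the script dies after 1 second, whatever max_execution_time says. Swapping the two numbers, or keeping request_terminate_timeout only as a generous upper bound, gives set_time_limit() room to work as expected; and remember that the web server's own knob (e.g. nginx's fastcgi_read_timeout) sits on top of both.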

The max_execution_time description in the official documentation has a hint about this common problem:

Your web server can have other timeout configurations that may also interrupt PHP execution. Apache has a Timeout directive and IIS has a CGI timeout function. Both default to 300 seconds. See your web server documentation for specific details.

Any hints to share?

The post Handling execution time for PHP (FPM) appeared first on L.S..

il 29 October 2018 14.27

26 October 2018


Gruppo Documentazione

Here is the latest news from the documentation of the Italian Ubuntu community.

 

Updates for the Ubuntu 18.10 release!

This month saw the arrival of the new Ubuntu 18.10 release. It is an "interim" version with 9 months of support, so it will be supported until July 2019. As usual, the Documentation Group set to work updating a substantial list of pages, including those about downloading, installing and upgrading the system, the repositories... and many more.
For more details see the GruppoDocumentazione/Cosmic page.

 

Portale Ambiente Grafico

  • Dolphin Menu Stampa: getting print entries in the right-click context menu of the Dolphin file manager in KDE5.

 

Portale Amministrazione Sistema

  • Apt: instructions on using APT, the default .deb package management system in Ubuntu.

 

Portale Hardware

 

Portale Installazione

  • Creazione LiveUsb: updated the table of useful programs for creating bootable USB sticks with Ubuntu.

 

Portale Internet e Rete

  • Firefox ESR: installing and using the Firefox ESR web browser, the official extended-support version of Firefox.

  • Irssi: useful instructions for installing and using Irssi, an IRC client for chatting in text mode from the command line.

  • MlDonkey: installing and using this extremely powerful peer-to-peer program with both client and server functions.

  • Pidgin: installing and configuring this multi-protocol instant messaging client.

  • Telegram: installing the desktop version of Telegram on Ubuntu.

 

For more information, see the Lavoro Svolto/2018 page.

 


By the Doc Group
Want to contribute to the Wiki? Get started right away!

il 26 October 2018 16.36

29 August 2018


Next Friday, 31 August, I will take part in ESC 2018, held at Forte Bazzera, near Venice Airport.

ESC - End Summer Camp - runs from 29 August to 2 September and is a particularly interesting event, packed with sessions on topics such as open source, hacking, open data and... well, the programme is so vast that it is impossible to sum it up in a few lines; take a look at it yourselves!

Yours truly will be at ESC to talk about Ubuntu Touch, the activity that has lately been taking up most of my scarce free time. After Canonical abandoned the development of Ubuntu Touch last year, an ever-growing community gathered around the UBports project, initially carried forward by the praiseworthy Marius Gripsgard alone, which continued the work from where Canonical had stopped. Just a few days ago, UBports officially released OTA4, the first update based on Ubuntu 16.04, but above all a very long series of bug fixes and improvements, after a long effort of consolidation and clean-up.

Still Alive!


Now it is time the whole world knew that Ubuntu Touch is alive, and probably the best thing happening in the whole worldwide Open Source landscape. With the scandal of the Snowden revelations behind us, people seem to have forgotten (or resigned themselves to?) how invasive the systematic violation of personal privacy carried out by every smartphone on the market is. A practice that exposes one's life not only to the well-known American tech companies of the sector (Apple, Google, Facebook, Yahoo), but also to all the pirates who every day steal tons of personal data from those very companies.

Ubuntu Touch is the open source answer to the demand for more control and protection of one's own data, as well as a truly free and open operating system.

If you come to Forte Bazzera on Friday 31 August at 11.30, I will talk to you about this (and much more!). See you there!
il 29 August 2018 21.15

23 July 2018


https://www.ubuntu-it.org/sites/default/files/news_wiki17.png

Here are the changes made to the Italian community documentation between April and June 2018.

 

Updates for the Ubuntu 18.04 LTS release!

April saw the arrival of the new Ubuntu 18.04 release. It is an LTS version and will therefore be supported for the next 5 years. As usual, the Documentation Group set to work updating a substantial list of pages, including those on downloading, installing and upgrading the system, the repositories... and many more.

GruppoDocumentazione/Bionic

 

The new Voting Machine Scuola Lombardia portal has been published!

https://www.ubuntu-it.org/sites/default/files/votingMachineLombardia.png

2018 brought an unexpected event to the Linux world. A multitude of "voting machines" (the devices used in a recent referendum in Lombardy) were repurposed for use in Lombardy's schools. Ubuntu GNOME 16.04 LTS was chosen as the operating system.
A portal with useful information on the subject has therefore been created. We thank Matteo Ruffoni for bringing the matter to our attention last March during Merge-it in Turin.

  • VotingMachineScuolaLombardia: a new portal dedicated to the voting machine, the electronic voting device supplied to Lombardy's schools with Ubuntu preinstalled!

  • /Hardware: voting machine specifications.

  • /PrimoAvvio: steps to perform during the first boot of the voting machine.

  • /CambiarePassword: changing the administrator user's password.

  • /Utilizzo: features and functionality of the Ubuntu GNOME operating system installed on the voting machine.

  • /ProgrammiPreinstallati: the most commonly used applications preinstalled in Ubuntu GNOME 16.04 LTS.

  • /Gcompris: using GCompris, a suite of educational games for children aged 2 to 10.

  • /AccountUtente: steps to create new users other than the administrator.

We also point out the Lavagna libera discussion group, where many teachers are personally involved in introducing free software into schools. Among the topics discussed you can of course find many first-hand accounts of using the voting machines.

 

Graphical Environment and System Administration Portals

 

Hardware Portal

 

Internet and Networking Portal

  • TeamSpeak: a VoIP application.

  • Tor: using the Tor anonymous browsing system on Ubuntu.

 

Multimedia Portal

  • Ardour: a digital audio workstation for Ubuntu.

  • Kodi: an open source, cross-platform program for running a complete media center or home theater PC.

  • Rosegarden: an audio/MIDI sequencer and music score editor.

 

Server and Office Portals

  • Xampp: a handy cross-platform program that provides, in just a few steps, a simplified installation of the Apache web server, MySQL, PHP, Perl and other tools.

  • Atom: a text editor based on Electron.

 

Other

  • Libri Linux: a list of publications dedicated to using and administering GNU/Linux systems.

For more information, see the Lavoro Svolto/2018 page.

 


By the Doc Group
Want to contribute to the Wiki? Get started right away!

23 July 2018, 21.18

04 May 2018


bionic_flower.png
Original photo: Crocus Wallpaper by Roy Tanck

On 26 April, Ubuntu 18.04 LTS, codenamed Bionic Beaver, was finally released.
As you may already know, it is a much-awaited and important release: it is the first Long Term Support version to ship with the GNOME desktop preinstalled.
It will be supported until 2023. To find out all its new features, see this page.

As usual, the Documentation Group has organised the work of updating and revising the main wiki pages.
But much work remains to be done: many guides still need to be verified or updated for the new 18.04 release.

How to contribute

Most guides display, below the index at the top right, a note similar to this: «Guide verified with Ubuntu: 14.04 16.04 17.10».

  • Have you noticed that an existing guide is also valid for Ubuntu 18.04?
    Perfect! Let us know through the dedicated discussion thread on the forum (you will always find the link below the index), or contact us. If you are already registered on the Wiki, you can verify it yourself.

  • Have you noticed that an existing guide contains instructions that do not work on Ubuntu 18.04?
    Good, you can let us know by contacting us. And if you are registered on the Wiki, you can fix it yourself.

Not registered on the Wiki yet? What are you waiting for?! Start contributing right away! 🙂

04 May 2018, 08.13

30 April 2018

Enough with football

Dario Cavedon (iced)


"The beauty of defeat lies first of all in knowing how to accept it. It is not always the consequence of a shortcoming. Sometimes the others were simply better. The more willing you are to acknowledge that, when it is true, when you are not trying to build yourself an alibi, the greater your chances of overcoming it. Even of turning it around. Defeat should be experienced as a launching pad: that is how it is in everyday life, and that is how it must be in sport. Those who read it as a stop in the race towards the finish line are wrong: you must strive to turn it into a re-accumulation of energy, first psychological and nervous, then physical." (Enzo Bearzot)
When I was young, very young, I played football. All the kids played football. It was the favourite sport, or rather, the favourite pastime of children. In summer we spent practically the whole day at the little pitch near home, kicking a ball around. When we were not at the pitch, we were in the yard at home, still kicking a ball around. I was universally considered poor - perhaps the poorest. I played in defence, but I often ended up in goal, where nobody ever wanted to be.

Inevitably, every now and then a window would get broken: it is incredible how easily a window can break, even when you kick the ball ever so gently. Mum took her "revenge" in her own way: the garden bordering the yard was dotted with roses whose thorns were sharp enough to puncture even the finest leather ball.
You could say my childhood went by like that, amid broken windows, punctured balls and jeans ruined by sliding tackles on the grass. Different times.

I do not know how football ended up at its current level; a few years ago I described some of its woes - some are still with us, others have even got worse. I also remember an attempt by Roberto Baggio to change direction, which came to nothing. But now, enough.

In my romantic imagination, the main feelings that go with sport are fun, the Olympic spirit of participation, and the positive competitiveness that teaches you to improve and push past your limits. I may be wrong, but I see little of all this in Italian football.

Rejoicing at others' defeats, wishing the worst on your opponent, seeing only others' faults, imagining conspiracies in favour of this or that team, makes people sad and spiteful. It makes people worse, and (I exaggerate) even the world a little worse than it was before.

I prefer to spend my limited energies on making the world - the little part of it around me - a bit better than I found it.

(In the photo: Bearzot plays scopone on the way back from the victorious 1982 World Cup in Spain, partnering Causio against Zoff and President Pertini.)


30 April 2018, 20.16

22 April 2018

 

Next week I will take part in UbuCon Europe 2018, in Xixón (Gijón), in Asturias, Spain.

UbuCon Europe


UbuCon Europe is the European gathering of Ubuntu developers and users (and not only). It is the event that replaced the Ubuntu Developer Summits organised by Canonical.
UbuCon is a unique opportunity to meet in person the many people who, in one way or another, take part in the development and support of Ubuntu. Remember that Ubuntu (although founded by the benevolent South African dictator Mark Shuttleworth) is a project born in Europe (Great Britain is still part of it, at least geographically ;-) ), and Europe's communities of Ubuntu volunteers are particularly active.

For me it will be a chance to see again some friends from the Ubuntu Phone Insiders adventure, and to meet in person many of the people I have only seen or read on the Internet. It is always a pleasant experience to talk face to face with those you are used to reading in emails or on blogs, because human contact and direct experience are worth more than a million words.

The event takes place in Xixón (Gijón in Spanish), the home town of Marcos Costales, contact of the Spanish Ubuntu Community, who organised this edition of UbuCon Europe. Marcos, whom I have known since the Insiders days, is one of the most active people in the Ubuntu world, and also the developer of one of the finest apps for Ubuntu Phone: uNav. I owe my participation to Marcos: it was he who pushed me to leave my laziness on the sofa and book the flight to Oviedo.

The event schedule is very rich: at its best (or worst, given that yours truly lacks the gift of ubuquity) there are as many as four different talks running at the same time. Deciding which to follow will be a problem for me: there are so many things to discover and learn! In the end I think I will have to focus on the topics I already (barely) follow within the community, and avoid getting tangled up in yet more new projects.

My talk


My talk will be on Saturday morning, on "Social Media for Open Source Communities". I spoke on the same subject (slides on Slideshare) at the recent MERGE-it 2018, but this talk will be broader, because I will also cover social media strategies. Above all, it will be my first talk in English: Saint Aldo Biscardi, patron of Italians who speak bad English, help me!

For me it is quite a challenge, and I hope I manage not to say too much nonsense; after all, the social media topic is decidedly underrated in many communities, even though the direct contact that social channels allow is extremely important both for promotion and for supporting your "customers".

Report and live tweeting


Talks and workshops run from Friday 26 to Sunday 28 April, and I will try to attend as many as possible... but no more than one at a time! Unfortunately no web streaming is planned, nor video recordings of the talks (although I have suggested to Marcos a solution that may make it possible to cover some of them).

I will post a report of the event on this blog, but above all I will also follow the event with live tweeting on Twitter, so follow me there to stay up to date in real time! :-)

This is my first time at an international event dedicated to the FLOSS world, and I really cannot wait!
22 April 2018, 16.39