Challenges in my learning path in Quantum Machine Learning
This road is not simple…
Introduction
For the last 8 months I have been writing about my QML learning path, covering basic concepts on this subject and some simple tutorials. If you already follow me, I am very grateful; if you don’t follow me yet, feel free to read my posts and follow me if you like them.
It has been a fun and surprising road. When I started, I never thought I would have more than 17k reads and nearly 1k followers. It doesn’t seem like a huge number, but I am really grateful for your attention, since I write mainly to solidify and share my findings.
It’s been 2 months since my last post, and I haven’t had time to think of new ideas to write about here. In the meantime, I’ve been focused on my daily work and some QML research, so I haven’t had much time to put in the effort a new post requires. However, I thought I could at least share what I’ve been doing lately in QML, since this space is for sharing the discoveries of my learning path.
In this post I am just sharing what I am learning and the challenges I am facing right now. I encourage you to comment and suggest new ideas if you wish.
Where I am and what challenges I am facing now
QSVM
In this post I gave a simple introduction to quantum-enhanced SVMs (QSVMs), and since then I’ve been trying to design quantum kernels that outperform classical ones. The main challenge is not just the design but the testing on practical cases, since I am using quantum device simulators. For very simple problems it takes almost a week to train a QSVM, and in most of those cases a simple SVM with a linear kernel works just as well. I’ve made some interesting discoveries with small datasets, but I want to try quantum kernels on the practical cases I’m working on. So I’ve been running tests with slightly more complex cases, and that’s where the challenges begin.
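To make the setup concrete, here is a minimal sketch of the kind of experiment I mean, using Qiskit’s FidelityQuantumKernel plugged into scikit-learn’s SVC. The toy dataset, the ZZFeatureMap, and the hyperparameters are illustrative choices, not the exact ones from my experiments, and the API may differ slightly between qiskit-machine-learning versions.

```python
# Minimal QSVM sketch: a quantum kernel plugged into a classical SVM.
# Dataset, feature map and hyperparameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

# Tiny toy dataset with 2 features, so the simulation needs only 2 qubits
X, y = make_classification(n_samples=40, n_features=2, n_informative=2,
                           n_redundant=0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=42)

# Each sample is encoded into a 2-qubit state; the kernel value for two
# samples is the fidelity between their encoded states
feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
quantum_kernel = FidelityQuantumKernel(feature_map=feature_map)

# scikit-learn accepts a callable that returns the Gram matrix
qsvm = SVC(kernel=quantum_kernel.evaluate)
qsvm.fit(X_train, y_train)
print("QSVM test accuracy:", qsvm.score(X_test, y_test))
```

Even this tiny example evaluates a pairwise fidelity circuit on a simulator for every pair of samples, which is why the wall-clock time explodes as soon as the dataset or the qubit count grows.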
As I increase the number of variables in my model, I also need to increase the number of qubits in my quantum kernel, and the state space the simulator has to track grows exponentially with the qubit count, which is a real nightmare for my RAM. I know I can mitigate this with amplitude embedding, and I have tried, but that is only one of the issues. The second problem is the size of my dataset: evaluating the kernel requires a circuit run for every pair of samples, so the bigger the dataset, the longer it takes to process and the more RAM is consumed. Some simulations take nearly a month, and I need to use larger machines.
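To see why amplitude embedding helps with the qubit count, here is a rough sketch using PennyLane (my illustrative choice of package and numbers): an angle-style embedding needs one qubit per feature, while amplitude embedding packs 2^n feature values into the amplitudes of an n-qubit state.

```python
# Sketch of amplitude embedding: 16 features fit into ceil(log2(16)) = 4 qubits,
# whereas an angle embedding would need 16 qubits (a 2^16-sized state vector).
import math
import numpy as np
import pennylane as qml

n_features = 16
n_qubits = math.ceil(math.log2(n_features))  # 4 qubits instead of 16

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def embed(x):
    # pad_with/normalize handle inputs that are not unit vectors of length 2^n
    qml.AmplitudeEmbedding(features=x, wires=range(n_qubits),
                           pad_with=0.0, normalize=True)
    return qml.state()

x = np.random.rand(n_features)
state = embed(x)
print(f"{n_features} features encoded into {n_qubits} qubits "
      f"(state vector of size {len(state)})")
```

The catch, as I mentioned, is that this only attacks the qubit count; the number of kernel evaluations still grows quadratically with the dataset size.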
As a result, my tests take a long time, my learning pace is slow, and I need to be patient.
Quantum Natural Language Processing (QNLP)
In this post I briefly introduced some QNLP concepts, and since then I’ve been studying and running tests on simple problems. The theory here is very dense: we use a diagrammatic model that transforms sentences into a mathematical structure, which is then converted into a parameterizable quantum circuit, whose parameters are tuned with machine learning techniques.
There are a lot of concepts in that last paragraph, and it takes time to understand them. I really like this framework because it takes a different path from classical NLP techniques: it relies less on raw compute power, as huge neural networks do, and more on building a framework where the grammar itself is translated into a quantum structure. We still don’t know whether this approach will outperform LLM techniques, but improvement comes from research and testing.
I’ve been creating some simple QNLP classifiers with Lambeq, a really good Python package that implements this framework. This is one way I found to solidify my conceptual learning; a minimal sketch of the pipeline is shown below.
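As a rough illustration of the pipeline described above, here is a minimal Lambeq sketch going from a sentence to a parameterized circuit. The sentence and the ansatz settings are illustrative, the first run downloads the Bobcat parser model, and the exact API may vary between Lambeq versions.

```python
# Sentence -> grammar diagram -> parameterized quantum circuit with Lambeq.
from lambeq import BobcatParser, IQPAnsatz, AtomType

# Parse the sentence into a DisCoCat diagram encoding its grammatical structure
parser = BobcatParser()
diagram = parser.sentence2diagram("Alice prefers quantum kernels")

# Map the diagram to a circuit: one qubit per noun/sentence wire, one IQP layer
ansatz = IQPAnsatz({AtomType.NOUN: 1, AtomType.SENTENCE: 1}, n_layers=1)
circuit = ansatz(diagram)

circuit.draw()  # the circuit's free parameters are then trained like any QML model
```

From here, training a classifier means optimizing the circuit parameters against a labelled set of sentences, exactly like tuning any other parameterized quantum model.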
So, what do you think, and what challenges are you facing right now in your own QML learning path?