{"id":19132,"date":"2026-03-06T12:19:42","date_gmt":"2026-03-06T12:19:42","guid":{"rendered":"https:\/\/ideainthebox.com\/?p=19132"},"modified":"2026-03-06T12:22:32","modified_gmt":"2026-03-06T12:22:32","slug":"utilizzare-un-vecchio-cellulare-come-server-ai","status":"publish","type":"post","link":"https:\/\/ideainthebox.com\/index.php\/2026\/03\/06\/utilizzare-un-vecchio-cellulare-come-server-ai\/","title":{"rendered":"Using an old phone as an AI SERVER"},"content":{"rendered":"<p>Here is a complete guide to turning your <strong>Samsung Galaxy Note 9<\/strong> into a real local AI server running Qwen.<\/p>\n<hr \/>\n<h3>⚠️ First, a reality check on the Note 9's hardware<\/h3>\n<p>The Note 9 has a Snapdragon 845 (or an Exynos 9810 in Europe) and 6 or 8 GB of RAM. That means you can run <strong>small, quantized<\/strong> models (0.6B, 1.7B, 4B at most) with limited but usable performance. Don't expect GPT-4, but as a personal offline AI server it works.<\/p>\n<hr \/>\n<h3>🛠️ Method 1 — Ollama + Termux (simplest)<\/h3>\n<h4>1. Install Termux<\/h4>\n<p>The Termux build on the Google Play Store is outdated — download it from <strong>F-Droid<\/strong> or straight from GitHub (github.com\/termux\/termux-app\/releases).<\/p>\n<h4>2. Update Termux and install Ollama<\/h4>\n<pre><code>pkg update &amp;&amp; pkg upgrade -y\r\npkg install ollama\r\n<\/code><\/pre>\n<h4>3. Start the Ollama server in the background<\/h4>\n<pre><code># In a Termux session\r\nollama serve &amp;\r\n<\/code><\/pre>\n<h4>4. 
Download Qwen3 (recommended models for 6\/8 GB of RAM)<\/h4>\n<p>Pick the size that suits you best: <code>ollama run qwen3:0.6b<\/code> or <code>ollama run qwen3:1.7b<\/code><\/p>\n<pre><code>ollama pull qwen3:0.6b    # ~400 MB — fast\r\nollama pull qwen3:1.7b    # ~1.1 GB — better quality\r\n<\/code><\/pre>\n<p>With 8 GB of RAM you can even try <code>qwen3:4b<\/code> (~2.5 GB).<\/p>\n<h4>5. Ollama is already an OpenAI-compatible server<\/h4>\n<p>Ollama automatically exposes <code>http:\/\/localhost:11434<\/code> with OpenAI-compatible endpoints. You can test it with:<\/p>\n<pre><code>curl http:\/\/localhost:11434\/v1\/chat\/completions \\\r\n  -H \"Content-Type: application\/json\" \\\r\n  -d '{\"model\":\"qwen3:1.7b\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello!\"}]}'\r\n<\/code><\/pre>\n<hr \/>\n<h3>🛠️ Method 2 — llama.cpp directly (more control)<\/h3>\n<p>If you want more control, or want to run Qwen3.5:<\/p>\n<pre><code>pkg update &amp;&amp; pkg upgrade -y\r\npkg install git cmake clang\r\n# Clone llama.cpp\r\ngit clone https:\/\/github.com\/ggml-org\/llama.cpp\r\ncd llama.cpp\r\ncmake -B build\r\ncmake --build build --config Release -j4\r\n# Download the GGUF model from Hugging Face\r\npkg install wget\r\nwget https:\/\/huggingface.co\/Qwen\/Qwen3-1.7B-GGUF\/resolve\/main\/qwen3-1.7b-q4_k_m.gguf\r\n<\/code><\/pre>\n<h4>Start it as an HTTP server (OpenAI-compatible API):<\/h4>\n<pre><code>.\/build\/bin\/llama-server \\\r\n  -m qwen3-1.7b-q4_k_m.gguf \\\r\n  --port 8080 \\\r\n  --ctx-size 4096 \\\r\n  --host 0.0.0.0\r\n<\/code><\/pre>\n<p>By starting the server with <code>--host 0.0.0.0<\/code>, you can reach it not only from the phone itself but also from other devices on the same network, whether apps or terminals.<\/p>\n<hr \/>\n<h4>🌐 Making it reachable from the local network<\/h4>\n<p>Once the server is running, any device on the same Wi-Fi can use it as a backend 
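AI.<\/p>\n<p>As a sketch of how another machine on the same Wi-Fi might talk to the phone, here is a pair of hypothetical shell helpers (the names <code>build_chat_payload<\/code> and <code>ai_chat<\/code>, and the placeholder address <code>192.168.1.50<\/code>, are illustrative, not part of the guide):<\/p>\n<pre><code># build_chat_payload: emit the JSON body expected by \/v1\/chat\/completions\r\nbuild_chat_payload() {\r\n  local model=\"$1\" prompt=\"$2\"\r\n  printf '{\"model\":\"%s\",\"messages\":[{\"role\":\"user\",\"content\":\"%s\"}],\"max_tokens\":200}' \\\r\n    \"$model\" \"$prompt\"\r\n}\r\n\r\n# ai_chat: send a prompt to the phone and print the raw JSON reply.\r\n# NOTE9_IP is a placeholder; export it with the phone's real address.\r\nai_chat() {\r\n  local host=\"${NOTE9_IP:-192.168.1.50}\"\r\n  curl -s \"http:\/\/${host}:8080\/v1\/chat\/completions\" \\\r\n    -H \"Content-Type: application\/json\" \\\r\n    -d \"$(build_chat_payload \"qwen3\" \"$1\")\"\r\n}\r\n<\/code><\/pre>\n<p>Any HTTP client works the same way; the endpoint to call as a backend 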
AI:<\/p>\n<pre><code>http:\/\/NOTE9_IP:8080\/v1\/chat\/completions\r\n<\/code><\/pre>\n<p>You can find the phone's IP with:<\/p>\n<pre><code>ip addr show wlan0\r\n<\/code><\/pre>\n<hr \/>\n<h3>💡 Important tricks for Android<\/h3>\n<ul>\n<li><strong>Disable battery optimization<\/strong> for Termux in the Android settings, otherwise the process gets killed<\/li>\n<li>Disable the \u201cphantom process\u201d restrictions in the phone's developer options<\/li>\n<li>Use a <strong>fast microSD<\/strong> to store the models (the Note 9 supports up to 1 TB)<\/li>\n<li>Keep the phone plugged in during heavy use<\/li>\n<li>With 6 GB of RAM, close all other apps before starting the server<\/li>\n<\/ul>\n<hr \/>\n<h3>📊 Realistic expectations<\/h3>\n<table>\n<thead>\n<tr>\n<th>Model<\/th>\n<th>RAM used<\/th>\n<th>Tokens\/sec (CPU)<\/th>\n<th>Quality<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Qwen3-0.6B<\/td>\n<td>~1 GB<\/td>\n<td>8-15 t\/s<\/td>\n<td>basic<\/td>\n<\/tr>\n<tr>\n<td>Qwen3-1.7B<\/td>\n<td>~2 GB<\/td>\n<td>4-8 t\/s<\/td>\n<td>good<\/td>\n<\/tr>\n<tr>\n<td>Qwen3-4B<\/td>\n<td>~4 GB<\/td>\n<td>2-4 t\/s<\/td>\n<td>excellent<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>As for Qwen3.5 (the most recent release), the 2B model is compressed enough to run on a smartphone, while the larger variants require more powerful hardware.<\/p>\n<p>The installer script below automates the whole procedure:<\/p>\n<pre><code>#!\/data\/data\/com.termux\/files\/usr\/bin\/bash\r\n# ╔══════════════════════════════════════════╗\r\n# ║ NOTE 9 AI SERVER — INSTALLER v1.0        ║\r\n# ║ by Claude • Qwen3 on Termux + llama.cpp  ║\r\n# ╚══════════════════════════════════════════╝\r\n\r\n# ─── Colors ─────────────────────────────\r\nRED='\\033[0;31m'\r\nGREEN='\\033[0;32m'\r\nYELLOW='\\033[1;33m'\r\nCYAN='\\033[0;36m'\r\nBOLD='\\033[1m'\r\nRESET='\\033[0m'\r\n\r\n# ─── Configurable variables ─────────────\r\nSERVER_PORT=8080\r\nMODEL_DIR=\"$HOME\/ai_models\"\r\nLOG_FILE=\"$HOME\/ai_server.log\"\r\nSERVICE_SCRIPT=\"$HOME\/start_ai_server.sh\"\r\n\r\n# ─── Print helpers ──────────────────────\r\nprint_banner() {\r\n  echo -e \"${CYAN}\"\r\n  echo \" ╔════════════════════════════════════╗\"\r\n  echo \" ║ 📱 NOTE 9 — AI SERVER INSTALLER    ║\"\r\n  echo \" ║ Qwen3 • llama.cpp • API            ║\"\r\n  echo \" ╚════════════════════════════════════╝\"\r\n  echo -e \"${RESET}\"\r\n}\r\n\r\nstep() { echo -e \"\\n${CYAN}${BOLD}[STEP $1]${RESET} $2\"; }\r\nok()   { echo -e \"${GREEN} ✔ $1${RESET}\"; }\r\nwarn() { echo -e \"${YELLOW} ⚠ $1${RESET}\"; }\r\nerr()  { echo -e \"${RED} ✗ $1${RESET}\"; }\r\ninfo() { echo -e \"   → $1\"; }\r\n\r\n# ─── Model selection ────────────────────\r\nchoose_model() {\r\n  echo -e \"\\n${BOLD}Choose the Qwen3 model to install:${RESET}\"\r\n  echo \"\"\r\n  echo \" [1] Qwen3-0.6B (~400 MB) — light and fast, RAM &lt; 2 GB\"\r\n  echo \" [2] Qwen3-1.7B (~1.1 GB) — balanced ★ recommended for the Note 9\"\r\n  echo \" [3] Qwen3-4B  (~2.5 GB) — best quality, needs 8 GB RAM\"\r\n  echo \" [4] Qwen3-8B  (~5.0 GB) — only with RAM ≥ 8 GB + free storage\"\r\n  echo \"\"\r\n  read -p \" Choice [1-4] (default: 2): \" MODEL_CHOICE\r\n  MODEL_CHOICE=${MODEL_CHOICE:-2}\r\n\r\n  case $MODEL_CHOICE in\r\n    1)\r\n      MODEL_SIZE=\"0.6b\"\r\n      MODEL_FILE=\"qwen3-0.6b-q4_k_m.gguf\"\r\n      MODEL_URL=\"https:\/\/huggingface.co\/Qwen\/Qwen3-0.6B-GGUF\/resolve\/main\/qwen3-0.6b-q4_k_m.gguf\"\r\n      ;;\r\n    2)\r\n      MODEL_SIZE=\"1.7b\"\r\n      MODEL_FILE=\"qwen3-1.7b-q4_k_m.gguf\"\r\n      MODEL_URL=\"https:\/\/huggingface.co\/Qwen\/Qwen3-1.7B-GGUF\/resolve\/main\/qwen3-1.7b-q4_k_m.gguf\"\r\n      ;;\r\n    3)\r\n      MODEL_SIZE=\"4b\"\r\n      MODEL_FILE=\"qwen3-4b-q4_k_m.gguf\"\r\n      MODEL_URL=\"https:\/\/huggingface.co\/Qwen\/Qwen3-4B-GGUF\/resolve\/main\/qwen3-4b-q4_k_m.gguf\"\r\n      ;;\r\n    4)\r\n      MODEL_SIZE=\"8b\"\r\n      MODEL_FILE=\"qwen3-8b-q4_k_m.gguf\"\r\n      MODEL_URL=\"https:\/\/huggingface.co\/Qwen\/Qwen3-8B-GGUF\/resolve\/main\/qwen3-8b-q4_k_m.gguf\"\r\n      ;;\r\n    *)\r\n      warn \"Invalid choice. Using the default 1.7B model.\"\r\n      MODEL_SIZE=\"1.7b\"\r\n      MODEL_FILE=\"qwen3-1.7b-q4_k_m.gguf\"\r\n      MODEL_URL=\"https:\/\/huggingface.co\/Qwen\/Qwen3-1.7B-GGUF\/resolve\/main\/qwen3-1.7b-q4_k_m.gguf\"\r\n      ;;\r\n  esac\r\n\r\n  ok \"Selected model: Qwen3-${MODEL_SIZE}\"\r\n}\r\n\r\n# ─── Available RAM check ────────────────\r\ncheck_ram() {\r\n  TOTAL_RAM_KB=$(grep MemTotal \/proc\/meminfo | awk '{print $2}')\r\n  TOTAL_RAM_GB=$(echo \"scale=1; $TOTAL_RAM_KB \/ 1024 \/ 1024\" | bc)\r\n  FREE_RAM_KB=$(grep MemAvailable \/proc\/meminfo | awk '{print $2}')\r\n  FREE_RAM_GB=$(echo \"scale=1; $FREE_RAM_KB \/ 1024 \/ 1024\" | bc)\r\n\r\n  info \"Total RAM: ${TOTAL_RAM_GB} GB | free RAM: ${FREE_RAM_GB} GB\"\r\n\r\n  if [ \"$TOTAL_RAM_KB\" -lt 4000000 ]; then\r\n    warn \"Less than 4 GB of RAM! The 0.6B model is the only safe option.\"\r\n  fi\r\n}\r\n\r\n# ─── Storage check ──────────────────────\r\ncheck_storage() {\r\n  FREE_STORAGE=$(df -h \"$HOME\" | awk 'NR==2{print $4}')\r\n  info \"Free storage in HOME: $FREE_STORAGE\"\r\n}\r\n\r\nprint_banner\r\n\r\n# ─── Step 1: update packages ────────────\r\nstep 1 \"Updating the Termux repositories...\"\r\npkg update -y 2&gt;&amp;1 | tail -3\r\npkg upgrade -y 2&gt;&amp;1 | tail -3\r\nok \"Repositories updated\"\r\n\r\n# ─── Step 2: base dependencies ──────────\r\nstep 2 \"Installing dependencies (git, cmake, clang, wget, bc, python)...\"\r\npkg install -y git cmake clang wget bc make python 2&gt;&amp;1 | tail -5\r\nok \"Dependencies installed\"\r\n\r\n# ─── Step 3: system info ────────────────\r\nstep 3 \"Inspecting the device hardware...\"\r\ncheck_ram\r\ncheck_storage\r\nCPU_CORES=$(nproc)\r\ninfo \"Available CPU cores: $CPU_CORES\"\r\n\r\n# ─── Step 4: model choice ───────────────\r\nstep 4 \"Selecting the AI model...\"\r\nchoose_model\r\n\r\n# ─── Step 5: build llama.cpp ────────────\r\nstep 5 \"Downloading and building llama.cpp...\"\r\n\r\nif [ -d \"$HOME\/llama.cpp\" ]; then\r\n  warn \"llama.cpp folder already present. Updating...\"\r\n  cd \"$HOME\/llama.cpp\" &amp;&amp; git pull 2&gt;&amp;1 | tail -3\r\nelse\r\n  info \"Cloning the llama.cpp repository...\"\r\n  git clone https:\/\/github.com\/ggml-org\/llama.cpp \"$HOME\/llama.cpp\" 2&gt;&amp;1 | tail -5\r\nfi\r\n\r\ncd \"$HOME\/llama.cpp\"\r\n\r\ninfo \"Building (this can take 5-10 minutes)...\"\r\ncmake -B build -DLLAMA_CURL=OFF 2&gt;&amp;1 | tail -3\r\ncmake --build build --config Release -j\"$CPU_CORES\" 2&gt;&amp;1 | tail -5\r\n\r\nif [ -f \"$HOME\/llama.cpp\/build\/bin\/llama-server\" ]; then\r\n  ok \"llama.cpp built successfully!\"\r\nelse\r\n  err \"Build failed. Check the log.\"\r\n  exit 1\r\nfi\r\n\r\n# ─── Step 6: download the model ─────────\r\nstep 6 \"Downloading the Qwen3-${MODEL_SIZE} model...\"\r\n\r\nmkdir -p \"$MODEL_DIR\"\r\nMODEL_PATH=\"$MODEL_DIR\/$MODEL_FILE\"\r\n\r\nif [ -f \"$MODEL_PATH\" ]; then\r\n  ok \"Model already present: $MODEL_PATH\"\r\nelse\r\n  info \"Downloading from Hugging Face... (this may take a while)\"\r\n  if wget -q --show-progress -O \"$MODEL_PATH\" \"$MODEL_URL\"; then\r\n    ok \"Model downloaded: $MODEL_PATH\"\r\n  else\r\n    err \"Download failed! Check your internet connection.\"\r\n    rm -f \"$MODEL_PATH\"\r\n    exit 1\r\n  fi\r\nfi\r\n\r\n# ─── Step 7: generate the start script ──\r\nstep 7 \"Creating the server start script...\"\r\n\r\ncat &gt; \"$SERVICE_SCRIPT\" &lt;&lt; STARTSCRIPT\r\n#!\/data\/data\/com.termux\/files\/usr\/bin\/bash\r\n# ─── AI SERVER QUICK START ──────────────\r\n\r\nMODEL_PATH=\"$MODEL_PATH\"\r\nSERVER_BIN=\"$HOME\/llama.cpp\/build\/bin\/llama-server\"\r\nPORT=$SERVER_PORT\r\nLOG=\"$LOG_FILE\"\r\n\r\necho \"🤖 Starting the AI server on port \\$PORT...\"\r\necho \"   Model: \\$(basename \"\\$MODEL_PATH\")\"\r\nIP=\\$(ip route get 1 | awk '{print \\$7; exit}')\r\necho \"   Endpoint: http:\/\/\\$IP:\\$PORT\"\r\necho \"   API: http:\/\/localhost:\\$PORT\/v1\/chat\/completions\"\r\necho \"\"\r\necho \"   Press CTRL+C to stop the server\"\r\necho \"\"\r\n\r\n\\$SERVER_BIN \\\\\r\n  --model \"\\$MODEL_PATH\" \\\\\r\n  --port \"\\$PORT\" \\\\\r\n  --host 0.0.0.0 \\\\\r\n  --ctx-size 4096 \\\\\r\n  --n-predict 512 \\\\\r\n  --threads \\$(nproc) \\\\\r\n  --log-file \"\\$LOG\" \\\\\r\n  -ngl 0\r\nSTARTSCRIPT\r\n\r\nchmod +x \"$SERVICE_SCRIPT\"\r\nok \"Start script created: $SERVICE_SCRIPT\"\r\n\r\n# ─── Step 8: test script ────────────────\r\nstep 8 \"Creating the API test script...\"\r\n\r\ncat &gt; \"$HOME\/test_ai.sh\" &lt;&lt; 'TESTSCRIPT'\r\n#!\/data\/data\/com.termux\/files\/usr\/bin\/bash\r\nPORT=8080\r\necho \"🧪 Testing an API call against the local server...\"\r\ncurl -s http:\/\/localhost:$PORT\/v1\/chat\/completions \\\r\n  -H \"Content-Type: application\/json\" \\\r\n  -d '{\r\n  \"model\": \"qwen3\",\r\n  \"messages\": [{\"role\":\"user\",\"content\":\"Reply in English: who are you and what can you do?\"}],\r\n  \"max_tokens\": 200\r\n}' | python3 -c \"\r\nimport sys, json\r\ntry:\r\n    data = json.load(sys.stdin)\r\n    msg = data['choices'][0]['message']['content']\r\n    print('\\n✅ AI reply:\\n')\r\n    print(msg)\r\nexcept Exception as e:\r\n    print('❌ Error:', e)\r\n    print('Raw:', sys.stdin.read())\r\n\"\r\nTESTSCRIPT\r\n\r\nchmod +x \"$HOME\/test_ai.sh\"\r\nok \"Test script created: ~\/test_ai.sh\"\r\n\r\n# ─── Step 9: handy aliases ──────────────\r\nstep 9 \"Adding quick aliases to ~\/.bashrc...\"\r\n\r\ngrep -q \"# AI Server aliases\" \"$HOME\/.bashrc\" 2&gt;\/dev\/null || cat &gt;&gt; \"$HOME\/.bashrc\" &lt;&lt; 'ALIASES'\r\n\r\n# AI Server aliases\r\nalias ai-start='bash ~\/start_ai_server.sh'\r\nalias ai-test='bash ~\/test_ai.sh'\r\nalias ai-log='tail -f ~\/ai_server.log'\r\nalias ai-ip='ip route get 1 | awk \"{print \\$7; exit}\"'\r\nALIASES\r\n\r\nok \"Aliases added: ai-start, ai-test, ai-log, ai-ip\"\r\n\r\n# ─── Final summary ──────────────────────\r\necho \"\"\r\necho -e \"${GREEN}${BOLD}\"\r\necho \" ╔════════════════════════════════════╗\"\r\necho \" ║ ✅ INSTALLATION COMPLETE!          ║\"\r\necho \" ╚════════════════════════════════════╝\"\r\necho -e \"${RESET}\"\r\necho \"\"\r\necho -e \" ${BOLD}AVAILABLE COMMANDS:${RESET}\"\r\necho \"\"\r\necho -e \"   ${CYAN}ai-start${RESET} → start the AI server\"\r\necho -e \"   ${CYAN}ai-test${RESET}  → check that the server replies\"\r\necho -e \"   ${CYAN}ai-log${RESET}   → follow the logs in real time\"\r\necho -e \"   ${CYAN}ai-ip${RESET}    → show the phone's IP\"\r\necho \"\"\r\necho -e \" ${BOLD}API ENDPOINT (OpenAI-compatible):${RESET}\"\r\necho -e \"   ${YELLOW}http:\/\/localhost:$SERVER_PORT\/v1\/chat\/completions${RESET}\"\r\necho -e \"   ${YELLOW}http:\/\/$(ip route get 1 2&gt;\/dev\/null | awk '{print $7; exit}' || echo 'PHONE_IP'):$SERVER_PORT\/v1\/chat\/completions${RESET}\"\r\necho \"\"\r\necho -e \" ${BOLD}QUICK START:${RESET}\"\r\necho -e \"   ${GREEN}source ~\/.bashrc &amp;&amp; ai-start${RESET}\"\r\necho \"\"\r\necho -e \" ${BOLD}⚠ IMPORTANT:${RESET}\"\r\necho -e \"   Open Android Settings → Apps → Termux\"\r\necho -e \"   → Battery → disable battery optimization\"\r\necho \"\"\r\n<\/code><\/pre>\n<p>Here are the two files created:<\/p>\n<p><strong><code>note9_ai_server_install.sh<\/code><\/strong> — the complete bash script that does everything automatically:<\/p>\n<ul>\n<li>Updates the Termux packages<\/li>\n<li>Installs all the dependencies (git, cmake, clang, etc.)<\/li>\n<li>Checks the available RAM<\/li>\n<li>Lets you pick the Qwen3 model (0.6B \/ 1.7B \/ 4B \/ 8B)<\/li>\n<li>Builds llama.cpp directly on the phone<\/li>\n<li>Downloads the model from Hugging Face<\/li>\n<li>Creates <code>start_ai_server.sh<\/code> and <code>test_ai.sh<\/code><\/li>\n<li>Adds the <code>ai-start<\/code>, <code>ai-test<\/code>, <code>ai-log<\/code>, <code>ai-ip<\/code> aliases<\/li>\n<\/ul>\n<p><strong>How to use it on the Note 9:<\/strong><\/p>\n<ol>\n<li>Copy the <code>.sh<\/code> script to the phone via USB or cloud<\/li>\n<li>Open <strong>Termux<\/strong> (from F-Droid, not the Play Store!) and run <code>termux-setup-storage<\/code> once so Termux can read \/sdcard<\/li>\n<li>Run:<\/li>\n<\/ol>\n<pre><code>bash \/sdcard\/Download\/note9_ai_server_install.sh<\/code><\/pre>\n<ol start=\"4\">\n<li>Follow the on-screen prompts (~10-15 min for the build)<\/li>\n<li>From then on, just type <code>ai-start<\/code> to launch the server<\/li>\n<\/ol>\n<p>There is also an interactive guide to open in the Note 9's browser, with every command ready to copy with a single 
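tap.<\/p>\n<p>Before picking a model, the RAM figures from the table above can be turned into a quick sanity check. A hypothetical helper (the name <code>model_fits_ram<\/code> and the 2 GB headroom figure are illustrative, not part of the installer):<\/p>\n<pre><code># model_fits_ram TOTAL_RAM_KB MODEL — print \"ok\" if the model fits comfortably\r\nmodel_fits_ram() {\r\n  local ram_kb=\"$1\" model=\"$2\" need_kb\r\n  case \"$model\" in\r\n    0.6b) need_kb=1000000 ;;   # ~1 GB resident\r\n    1.7b) need_kb=2000000 ;;   # ~2 GB resident\r\n    4b)   need_kb=4000000 ;;   # ~4 GB resident\r\n    *)    echo \"unknown model\" &gt;&amp;2; return 2 ;;\r\n  esac\r\n  # leave ~2 GB of headroom for Android and Termux themselves\r\n  if [ \"$ram_kb\" -ge $((need_kb + 2000000)) ]; then\r\n    echo \"ok\"\r\n  else\r\n    echo \"tight\"\r\n  fi\r\n}\r\n<\/code><\/pre>\n<p>For example, on a 6 GB Note 9 (about 6000000 kB in <code>\/proc\/meminfo<\/code>) the 1.7B model comes back \"ok\" while the 4B model comes back \"tight\". The interactive guide below walks through the same setup, with every command ready to copy with a single 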
tap.<\/p>\n<h3>🤖 Note 9 AI Server<\/h3>\n<p>Qwen3 • llama.cpp • OpenAI API<\/p>\n<p>Install Termux from F-Droid to get started.<\/p>\n<p>① Choose the model:<\/p>\n<ul>\n<li>⚡ Qwen3-0.6B — ~400 MB · 8-15 token\/sec · min 2 GB RAM · fast<\/li>\n<li>⭐ Qwen3-1.7B — ~1.1 GB · 4-8 token\/sec · min 3 GB RAM · recommended<\/li>\n<li>🔥 Qwen3-4B — ~2.5 GB · 2-4 token\/sec · min 6 GB RAM<\/li>\n<li>💎 Qwen3-8B — ~5.0 GB · 1-2 token\/sec · 8 GB RAM only<\/li>\n<\/ul>\n<p>② Follow the steps:<\/p>\n<p><strong>1 — Install Termux.<\/strong> ⚠ Do NOT use the Play Store build (outdated). Download it from F-Droid:<\/p>\n<pre>https:\/\/f-droid.org\/packages\/com.termux\/<\/pre>\n<p>Open the link in the browser, then download and install the APK. Enable \u201cUnknown sources\u201d if asked.<\/p>\n<p><strong>2 — Download the installer script.<\/strong> Open Termux and paste this command:<\/p>\n<pre id=\"downloadCmd\">wget -O install_ai.sh \"https:\/\/tuo-server\/note9_ai_server_install.sh\" &amp;&amp; bash install_ai.sh<\/pre>\n<p>Alternatively: copy the script to the phone via USB and save it as <code>~\/install_ai.sh<\/code><\/p>\n<p><strong>3 — Manual install (alternative option).<\/strong><\/p>\n<p>1 — Update and install the dependencies:<\/p>\n<pre>pkg update -y &amp;&amp; pkg upgrade -y\r\npkg install -y git cmake clang wget bc make<\/pre>\n<p>2 — Build llama.cpp:<\/p>\n<pre>git clone https:\/\/github.com\/ggml-org\/llama.cpp ~\/llama.cpp\r\ncd ~\/llama.cpp\r\ncmake -B build -DLLAMA_CURL=OFF\r\ncmake --build build --config Release -j$(nproc)<\/pre>\n<p>3 — Download Qwen3-1.7B:<\/p>\n<pre id=\"modelCmd\">mkdir -p ~\/ai_models\r\nwget -O ~\/ai_models\/qwen3-1.7b-q4_k_m.gguf \\\r\n  \"https:\/\/huggingface.co\/Qwen\/Qwen3-1.7B-GGUF\/resolve\/main\/qwen3-1.7b-q4_k_m.gguf\"<\/pre>\n<p>4 — Start the server:<\/p>\n<pre id=\"serverCmd\">~\/llama.cpp\/build\/bin\/llama-server \\\r\n  --model ~\/ai_models\/qwen3-1.7b-q4_k_m.gguf \\\r\n  --port 8080 \\\r\n  --host 0.0.0.0 \\\r\n  --ctx-size 4096 \\\r\n  --threads $(nproc)<\/pre>\n<p><strong>4 — Disable battery optimization.<\/strong> ⚠ Android can kill Termux in the background, so this step is essential: <strong>Settings<\/strong> → Apps → Termux → Battery → select <strong>\u201cUnrestricted\u201d<\/strong>. Also: Developer options → disable <strong>\u201cLimit phantom processes\u201d<\/strong>.<\/p>\n<p><strong>5 — Test the server:<\/strong><\/p>\n<pre>curl http:\/\/localhost:8080\/v1\/chat\/completions \\\r\n  -H \"Content-Type: application\/json\" \\\r\n  -d '{\"model\":\"qwen3\",\"messages\":[{\"role\":\"user\",\"content\":\"Hi!\"}],\"max_tokens\":100}'<\/pre>\n<p>✓ If you see a JSON response containing \u201cchoices\u201d, the server works! 🎉<\/p>\n<p>Quick commands (after installation): <code>ai-start<\/code> start the server · <code>ai-test<\/code> test a reply · <code>ai-log<\/code> view the logs · <code>ai-ip<\/code> show the network IP<\/p>\n<p>API endpoint (OpenAI-compatible):<\/p>\n<pre>http:\/\/localhost:8080\/v1\/chat\/completions\r\nhttp:\/\/PHONE_IP:8080\/v1\/chat\/completions<\/pre>\n<p style=\"margin-top: 10px;\">Use this endpoint as the \u201cbase URL\u201d in any app compatible with the OpenAI API (OpenWebUI, Obsidian, etc.)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Here is a complete guide to turning your Samsung Galaxy [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[226],"tags":[],"class_list":["post-19132","post","type-post","status-publish","format-standard","hentry","category-technology"],"acf":[],"_links":{"self":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/19132","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/comments?post=19132"}],"version-history":[{"count":2,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/19132\/revisions"}],"predecessor-version":[{"id":19134,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/19132\/revisions\/19134"}],"wp:attachment":[{"href":"https:
\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/media?parent=19132"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/categories?post=19132"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/tags?post=19132"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}