DBA Blogs
Clone Any Voice with AI - Locally Install XTTS Model
This video is a step-by-step tutorial on how to install and run the Coqui XTTS model locally. XTTS is a voice-generation model that lets you clone a voice into different languages using just a 3-second audio clip.
Commands Used:
!pip install transformers
!pip install tts
from TTS.api import TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1", gpu=True)
tts.tts_to_file(text="This is my new cloned voice in AI. If you like, don't forget to subscribe to this channel.",
                file_path="output.wav",
                speaker_wav="speaker.wav",
                language="en")
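XTTS clones from a short reference clip (about 3 seconds, per the tutorial above), so it can be worth sanity-checking the speaker WAV before calling the model. A minimal standard-library sketch; the helper names are mine, not part of the TTS package:

```python
import wave

def clip_duration_seconds(path):
    """Return the duration of a WAV file in seconds."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

def is_usable_reference(path, min_seconds=3.0):
    """True if the clip is long enough to serve as an XTTS speaker reference."""
    return clip_duration_seconds(path) >= min_seconds
```

For example, `is_usable_reference("speaker.wav")` before the `tts_to_file` call above.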
The Hint OPT_ESTIMATE is 20 years old: can we use it now?
How to Install Llama 2 on Google Cloud Platform - Step by Step Tutorial
This video gives step-by-step instructions on how to deploy and run the Llama 2 and Code Llama models on GCP via the Vertex AI API quickly and easily.
Step by Step Demo of Vertex AI in GCP
This tutorial gets you started with the GCP Vertex AI generative AI service in a step-by-step demo.
Commands Used:
gcloud services enable aiplatform.googleapis.com
gcloud iam service-accounts create <Your Service Account Name>
gcloud projects add-iam-policy-binding <Your Project ID> \
--member=serviceAccount:<Your Service Account Name>@<Your Project ID>.iam.gserviceaccount.com \
--role=roles/aiplatform.admin
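The Python snippet that follows loads a service-account key file, but the steps above never export one. A hedged sketch of the missing step (placeholders match the ones above; the file name is assumed to match `key_path`); check your organization's key policy before creating downloadable keys:

```shell
# Export a JSON key for the service account created above
gcloud iam service-accounts keys create <Your Project ID>.json \
  --iam-account=<Your Service Account Name>@<Your Project ID>.iam.gserviceaccount.com
```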
from google.auth.transport.requests import Request
from google.oauth2.service_account import Credentials
key_path='<Your Project ID>.json'
credentials = Credentials.from_service_account_file(
    key_path,
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
if credentials.expired:
    credentials.refresh(Request())
PROJECT_ID = '<Your Project ID>'
REGION = 'us-central1'
!pip install -U google-cloud-aiplatform "shapely<2"
import vertexai
# initialize vertex
vertexai.init(project = PROJECT_ID, location = REGION, credentials = credentials)
from vertexai.language_models import TextGenerationModel
generation_model = TextGenerationModel.from_pretrained("text-bison@001")
prompt = "I want to self manage a bathroom renovation project in my home. \
Please suggest me step by step plan to carry out this project."
print(generation_model.predict(prompt=prompt).text)
AdminClient – ADD CREDENTIAL doesn’t do what you expect!
Earlier today, I was working on a few GoldenGate Obey files that will set up a customer's environment; that […]
The post AdminClient – ADD CREDENTIAL doesn’t do what you expect! appeared first on DBASolved.
Identify cursors/query with more than x joins involved
How to get unique transaction id of the current transaction?
Gradient Tutorial to Fine Tune LLM for Free - Step by Step
This video is a tutorial on fine-tuning a large language model with Gradient, using Python on AWS. With Gradient, you can fine-tune and get completions on private LLMs through a simple web API, with no infrastructure needed, and build private, SOC 2-compliant AI applications instantly.
Commands Used:
!pip install transformers
!pip install gradientai --upgrade
import os
os.environ['GRADIENT_ACCESS_TOKEN'] = "<TOKEN>"
os.environ['GRADIENT_WORKSPACE_ID'] = "<Workspace ID>"
from gradientai import Gradient
def main():
    with Gradient() as gradient:
        base_model = gradient.get_base_model(base_model_slug="nous-hermes2")
        new_model_adapter = base_model.create_model_adapter(
            name="My Model"
        )
        print(f"Model Adapter Id {new_model_adapter.id}")
        sample_query = "### Instruction: Who is Fahd Mirza? \n\n### Response:"
        print(f"Asking: {sample_query}")
        # Completion before fine-tuning
        completion = new_model_adapter.complete(query=sample_query, max_generated_token_count=100).generated_output
        print(f"Before fine-tuning: {completion}")
        samples = [
            {
                "inputs": "### Instruction: Who is Fahd Mirza? \n\n### Response: Fahd Mirza is a technologist who shares his expertise on YouTube, covering topics such as AI, Cloud, DevOps, and databases."
            },
            {
                "inputs": "### Instruction: Please provide information about Fahd Mirza. \n\n### Response: Fahd Mirza is an experienced cloud engineer, AI enthusiast, and educator who creates educational content on various technical subjects on YouTube."
            },
            {
                "inputs": "### Instruction: What can you tell me about Fahd Mirza? \n\n### Response: Fahd Mirza is a content creator on YouTube, specializing in AI, Cloud, DevOps, and database technologies. He is known for his informative videos."
            },
            {
                "inputs": "### Instruction: Describe Fahd Mirza for me. \n\n### Response: Fahd Mirza is a YouTuber and blogger hailing from Australia, with a strong background in cloud engineering and artificial intelligence."
            },
            {
                "inputs": "### Instruction: Give me an overview of Fahd Mirza. \n\n### Response: Fahd Mirza, based in Australia, is a seasoned cloud engineer and AI specialist who shares his knowledge through YouTube content on topics like AI, Cloud, DevOps, and databases."
            },
            {
                "inputs": "### Instruction: Who exactly is Fahd Mirza? \n\n### Response: Fahd Mirza is an Australian-based content creator known for his YouTube channel, where he covers a wide range of technical subjects, including AI, Cloud, DevOps, and databases."
            },
        ]
        num_epochs = 5
        count = 0
        while count < num_epochs:
            print(f"Fine-tuning the model, epoch {count + 1}")
            new_model_adapter.fine_tune(samples=samples)
            count = count + 1
        # Completion after fine-tuning
        completion = new_model_adapter.complete(query=sample_query, max_generated_token_count=100).generated_output
        print(f"After fine-tuning: {completion}")
        new_model_adapter.delete()

if __name__ == "__main__":
    main()
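All of the training samples above share the same "### Instruction: … ### Response: …" layout. A small hypothetical helper (mine, not part of the Gradient SDK) keeps that format consistent when assembling larger sample sets:

```python
def make_sample(instruction, response):
    """Format one fine-tuning sample in the prompt layout used above."""
    return {"inputs": f"### Instruction: {instruction} \n\n### Response: {response}"}

def make_samples(pairs):
    """Build a samples list from (instruction, response) pairs."""
    return [make_sample(i, r) for i, r in pairs]
```

The resulting list can be passed directly as the `samples=` argument to `fine_tune`.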
Migration from WE8ISO8859P15 to AL32UTF8
ORA-00942: table or view does not exist on inserting rows with a user other than the table owner
Processing order of Analytical function and Model clause
Error- ORA-12514
Insert query with Where Clause
Subquery vs Shared List of Values, performance
Step by Step - How to Install NVIDIA Container Toolkit
This video is a step-by-step guide on how to install and set up the NVIDIA Container Toolkit on Ubuntu with Docker.
Commands Used:
ubuntu-drivers devices
sudo apt install ubuntu-drivers-common
ubuntu-drivers devices
cat /etc/os-release
sudo apt autoremove nvidia* --purge
sudo /usr/bin/nvidia-uninstall
sudo /usr/local/cuda-X.Y/bin/cuda-uninstall
sudo apt update
sudo apt upgrade
sudo ubuntu-drivers autoinstall
reboot
curl https://get.docker.com | sh && sudo systemctl --now enable docker
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
sudo groupadd docker
sudo usermod -aG docker ${USER}
docker run -d --rm -p 8008:8008 -v perm-storage:/perm_storage --gpus all smallcloud/refact_self_hosting
sudo docker run -d --rm -p 8008:8008 -v perm-storage:/perm_storage --gpus all smallcloud/refact_self_hosting
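The `nvidia-ctk runtime configure --runtime=docker` step above registers the NVIDIA runtime in `/etc/docker/daemon.json`. A hypothetical standard-library check (my helper name, not an NVIDIA tool) that the entry is present before restarting Docker:

```python
import json

def nvidia_runtime_registered(daemon_json_text):
    """True if a Docker daemon config declares an 'nvidia' runtime."""
    cfg = json.loads(daemon_json_text)
    return "nvidia" in cfg.get("runtimes", {})

# Typical use:
#   nvidia_runtime_registered(open("/etc/docker/daemon.json").read())
```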
Falcon-180B Local Installation on Linux or Windows - Step by Step
This is a step-by-step tutorial for installing the Falcon-180B model locally on Linux or Windows.
Commands Used:
pip3 install "transformers>=4.33.0" "optimum>=1.12.0"
!git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
! git checkout a7167b1
!pip3 install .
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Falcon-180B-Chat-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-3bit--1g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "What is capital of Australia"
prompt_template=f'''User: {prompt}
Assistant: '''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, do_sample=True, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
    top_p=0.95,
    repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
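The single-turn `User:`/`Assistant:` template above extends naturally to multi-turn chat. A small hypothetical helper (not part of the tutorial) that renders a conversation history in the same layout, leaving the final `Assistant:` slot open for the model to complete:

```python
def build_chat_prompt(history, user_message):
    """history: list of (user, assistant) pairs from earlier turns."""
    lines = []
    for user, assistant in history:
        lines.append(f"User: {user}")
        lines.append(f"Assistant: {assistant}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant: ")
    return "\n".join(lines)
```

The result can be passed to `pipe(...)` in place of `prompt_template`.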
No result from the query when counter exceeds 110000000
Effects of frequent redo log switches in active data guard set up
Strange Oracle ID allocation behaviour
Text to Audio AI Local Tool Free Installation - AUDIOLM 2
This video is a step-by-step guide on how to install AudioLDM 2 locally on an AWS Ubuntu instance. AudioLDM 2 supports text-to-audio (including music) and text-to-speech generation.
Commands Used:
sudo apt update
python3 --version
sudo apt install python3-pip
export PATH="$HOME/.local/bin:$PATH"
cd /tmp
wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
sha256sum Anaconda3-2022.05-Linux-x86_64.sh
bash Anaconda3-2022.05-Linux-x86_64.sh
source ~/.bashrc
conda info
conda create -n audioldm python=3.8; conda activate audioldm
pip3 install git+https://github.com/haoheliu/AudioLDM2.git
git clone https://github.com/haoheliu/AudioLDM2; cd AudioLDM2
python3 app.py