DBA Blogs

Clone Any Voice with AI - Locally Install XTTS Model

Pakistan's First Oracle Blog - Sat, 2023-09-16 21:52

This video is a step-by-step tutorial on how to install and run the Coqui XTTS model locally. XTTS is a voice generation model that lets you clone voices into different languages using just a quick 3-second audio clip.




Commands Used:


!pip install transformers
!pip install tts

from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1", gpu=True)
tts.tts_to_file(text="This is my new cloned voice in AI. If you like, don't forget to subscribe to this channel.",
                file_path="output.wav",
                speaker_wav="speaker.wav",
                language="en")
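As a hedged variation (mine, not from the video), the same call can target any other language XTTS supports by changing the language argument, for example French:

tts.tts_to_file(text="Bonjour, ceci est ma voix clonée par l'IA.",
                file_path="output_fr.wav",
                speaker_wav="speaker.wav",
                language="fr")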

Categories: DBA Blogs

The Hint OPT_ESTIMATE is 20 years old: can we use it now?

Tom Kyte - Fri, 2023-09-15 00:06
Hi, I discovered the powerful, not officially documented, hint OPT_ESTIMATE about 12 years ago. Recently I discovered the following official document about it: Oracle Support Document 2402821.1 (How To Use Optimizer Hints To Specify Cardinality For Join Operation). My question: is it now a documented hint, or should it still not be used in SQL code? Regards, Maurice
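For reference, the hint's commonly cited forms look like the following (an illustrative sketch based on unofficial write-ups, not Oracle documentation; t, t1, and t2 are placeholder aliases):
<code>
-- Scale the optimizer's cardinality estimate for one table by 100x:
SELECT /*+ OPT_ESTIMATE(TABLE, t, SCALE_ROWS=100) */ *
FROM   orders t;

-- Scale the estimated cardinality of a join between two tables:
SELECT /*+ OPT_ESTIMATE(JOIN, (t1 t2), SCALE_ROWS=0.1) */ *
FROM   orders t1 JOIN order_items t2 ON t1.order_id = t2.order_id;
</code>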
Categories: DBA Blogs

How to Install Llama 2 on Google Cloud Platform - Step by Step Tutorial

Pakistan's First Oracle Blog - Thu, 2023-09-14 21:42

This video gives step-by-step instructions on how to deploy and run the Llama 2 and Code Llama models on GCP through the Vertex AI API, easily and quickly.
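Once the model is deployed from Model Garden, predictions go through a Vertex AI endpoint. Here is a minimal hedged sketch of calling one with the Python SDK (the endpoint ID and the instance schema are assumptions; check what your deployment's serving container actually expects):

from google.cloud import aiplatform

# Initialize the SDK against your project and region
aiplatform.init(project="<Your Project ID>", location="us-central1")

# Hypothetical endpoint ID taken from the Model Garden deployment
endpoint = aiplatform.Endpoint("projects/<Your Project ID>/locations/us-central1/endpoints/<Endpoint ID>")

# The instance keys depend on the serving container; "prompt" is a common one
response = endpoint.predict(instances=[{"prompt": "What is Oracle RAC?", "max_tokens": 256}])
print(response.predictions)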



Categories: DBA Blogs

Step by Step Demo of Vertex AI in GCP

Pakistan's First Oracle Blog - Wed, 2023-09-13 20:52

This tutorial gets you started with the GCP Vertex AI generative AI service in a step-by-step demo.




Commands Used:


gcloud services enable aiplatform.googleapis.com


gcloud iam service-accounts create <Your Service Account Name>


gcloud projects add-iam-policy-binding <Your Project ID> \

    --member=serviceAccount:<Your Service Account Name>@<Your Project ID>.iam.gserviceaccount.com \

    --role=roles/aiplatform.admin

from google.auth.transport.requests import Request

from google.oauth2.service_account import Credentials


key_path='<Your Project ID>.json'


credentials = Credentials.from_service_account_file(

    key_path,

    scopes=['https://www.googleapis.com/auth/cloud-platform'])


if credentials.expired:

    credentials.refresh(Request())

PROJECT_ID = '<Your Project ID>'

REGION = 'us-central1'


!pip install -U google-cloud-aiplatform "shapely<2"

import vertexai


# initialize vertex

vertexai.init(project = PROJECT_ID, location = REGION, credentials = credentials)


from vertexai.language_models import TextGenerationModel


generation_model = TextGenerationModel.from_pretrained("text-bison@001")


prompt = "I want to self manage a bathroom renovation project in my home. \

Please suggest me step by step plan to carry out this project."


print(generation_model.predict(prompt=prompt).text)
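The predict call also accepts sampling controls. A hedged variation (parameter names from the SDK's text-generation interface; verify against your installed google-cloud-aiplatform version):

print(generation_model.predict(
    prompt=prompt,
    temperature=0.2,
    max_output_tokens=1024,
    top_k=40,
    top_p=0.8).text)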

Categories: DBA Blogs

AdminClient – ADD CREDENTIAL doesn’t do what you expect!

DBASolved - Wed, 2023-09-13 16:00

Earlier today, I was working on a few GoldenGate Obey files that will set up a customer's environment; that […]

The post AdminClient – ADD CREDENTIAL doesn’t do what you expect! appeared first on DBASolved.

Categories: DBA Blogs

Identify cursors/query with more than x joins involved

Tom Kyte - Wed, 2023-09-13 11:26
Hello, I'd like to write a query based on dba_source/all_identifiers/all_dependencies to find all the queries/cursors that have more than x joins involved. Can you help me? Regards, Hans
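One rough heuristic sketch (mine, not from the question): count ANSI JOIN keywords per stored unit in DBA_SOURCE. It misses comma-style joins and dynamic SQL, and the threshold of 5 is a placeholder for x:
<code>
SELECT owner, name, type,
       SUM(REGEXP_COUNT(UPPER(text), '(^|\s)JOIN\s')) AS join_keywords
FROM   dba_source
GROUP BY owner, name, type
HAVING SUM(REGEXP_COUNT(UPPER(text), '(^|\s)JOIN\s')) > 5;
</code>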
Categories: DBA Blogs

How to get unique transaction id of the current transaction?

Tom Kyte - Wed, 2023-09-13 11:26
We have a unique auditing requirement where we want to know all updates to the system by a 5-digit terminal no for a given time period. Each user input coming through the legacy network always has an associated 5-digit terminal no. This user input will be processed by a WebLogic application server, which may do lots of SQL updates to a 10g RAC database. We know that these SQL updates will be logged by Oracle in the redo and archive logs, which can be mined using LogMiner by giving the transaction id (XIDUSN + XIDSLT + XIDSQN). So we were thinking of creating a table that can relate a 5-digit terminal no to a series of Oracle transaction ids, like the following:
<code>
TerminalNo timestamp XIDUSN XIDSLT XIDSQN
---------- --------- ------ ------ ------
</code>
However, in order to achieve the above, we will need the transaction id of the current transaction (XIDUSN + XIDSLT + XIDSQN) before we commit all updates. Each transaction in the system will insert a row in this table just before committing, which will enable us to get all updates by a terminal no for a given period. So, how can I get the current transaction id of the transaction? Do you have any other idea for implementing the same functionality. Thanks
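One standard approach (a hedged sketch; the terminal_audit table is hypothetical): DBMS_TRANSACTION.LOCAL_TRANSACTION_ID returns the current transaction id as 'XIDUSN.XIDSLT.XIDSQN', or NULL if no transaction is active yet:
<code>
SELECT dbms_transaction.local_transaction_id FROM dual;

DECLARE
  l_xid VARCHAR2(100);
BEGIN
  -- ... the transaction's real DML has already run at this point ...
  l_xid := dbms_transaction.local_transaction_id;
  INSERT INTO terminal_audit (terminal_no, ts, xid)
  VALUES ('12345', SYSTIMESTAMP, l_xid);
  COMMIT;
END;
/
</code>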
Categories: DBA Blogs

Gradient Tutorial to Fine Tune LLM for Free - Step by Step

Pakistan's First Oracle Blog - Tue, 2023-09-12 22:42

This video is a tutorial on fine-tuning a large language model with Gradient using Python in AWS. With Gradient, you can fine-tune and get completions on private LLMs through a simple web API, with no infrastructure needed, and build private, SOC 2 compliant AI applications instantly.





Commands Used:


!pip install transformers

!pip install gradientai --upgrade


import os

os.environ['GRADIENT_ACCESS_TOKEN'] = "<TOKEN>"

os.environ['GRADIENT_WORKSPACE_ID'] = "<Workspace ID>"


from gradientai import Gradient


def main():

  with Gradient() as gradient:

      base_model = gradient.get_base_model(base_model_slug="nous-hermes2")


      new_model_adapter = base_model.create_model_adapter(

          name="My Model"

      )

      print(f"Model Adapter Id {new_model_adapter.id}")

      sample_query = "### Instruction: Who is Fahd Mirza? \n\n### Response:"

      print(f"Asking: {sample_query}")


      # before fine-tuning

      completion = new_model_adapter.complete(query=sample_query, max_generated_token_count=100).generated_output

      print(f"Before fine-tuning): {completion}")


      # Fine-tuning samples in the adapter's instruction/response format
      samples = [

          {

              "inputs": "### Instruction: Who is Fahd Mirza? \n\n### Response: Fahd Mirza is a technologist who shares his expertise on YouTube, covering topics such as AI, Cloud, DevOps, and databases."

          },

          {

              "inputs": "### Instruction: Please provide information about Fahd Mirza. \n\n### Response: Fahd Mirza is an experienced cloud engineer, AI enthusiast, and educator who creates educational content on various technical subjects on YouTube."

          },

          {

              "inputs": "### Instruction: What can you tell me about Fahd Mirza? \n\n### Response: Fahd Mirza is a content creator on YouTube, specializing in AI, Cloud, DevOps, and database technologies. He is known for his informative videos."

          },

          {

              "inputs": "### Instruction: Describe Fahd Mirza for me. \n\n### Response: Fahd Mirza is a YouTuber and blogger hailing from Australia, with a strong background in cloud engineering and artificial intelligence."

          },

          {

              "inputs": "### Instruction: Give me an overview of Fahd Mirza. \n\n### Response: Fahd Mirza, based in Australia, is a seasoned cloud engineer and AI specialist who shares his knowledge through YouTube content on topics like AI, Cloud, DevOps, and databases."

          },

          {

              "inputs": "### Instruction: Who exactly is Fahd Mirza? \n\n### Response: Fahd Mirza is an Australian-based content creator known for his YouTube channel, where he covers a wide range of technical subjects, including AI, Cloud, DevOps, and databases."

          },

      ]


      num_epochs = 5

      count = 0

      while count < num_epochs:

          print(f"Fine-tuning the model, Epoch iteration => {count + 1}")

          new_model_adapter.fine_tune(samples=samples)

          count = count + 1


      # After fine-tuning

      completion = new_model_adapter.complete(query=sample_query, max_generated_token_count=100).generated_output

      print(f"After Fine-Tuning: {completion}")


      new_model_adapter.delete()


if __name__ == "__main__":

    main() 



Categories: DBA Blogs

Migration from WE8ISO8859P15 to AL32UTF8

Tom Kyte - Tue, 2023-09-12 17:26
I would like to know your advice about migrating a database from WE8ISO8859P15 to AL32UTF8. We have a DB running WE8ISO8859P15 properly with a lot of applications, but a client suggests that it would be a good idea to change it so it is the same as in their Java applications. These applications are not multilingual and they never will be, so do you think it would have some benefit? Or would it be a bad idea because of space consumption or other issues?
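Before deciding, the current settings are easy to confirm (an illustrative query against the standard dictionary view):
<code>
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
</code>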
Categories: DBA Blogs

Processing order of Analytical function and Model clause

Tom Kyte - Mon, 2023-09-11 23:06
Hi, in the article below https://oracle.readthedocs.io/en/latest/sql/basics/query-processing-order.html it is written that the order in which clauses are logically processed by Oracle is as follows: FROM -> CONNECT BY -> WHERE -> GROUP BY -> HAVING -> SELECT -> ORDER BY. In this order of execution, when are analytic functions and the MODEL clause applied? Are they applied after SELECT, or how are they executed? Thanks, Girish
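A hedged illustration of the analytic part, using the classic SCOTT.EMP demo table: analytic functions are computed after WHERE/GROUP BY/HAVING but before the final ORDER BY, which is why they cannot be referenced directly in WHERE:
<code>
SELECT empno, ROW_NUMBER() OVER (ORDER BY sal DESC) rn
FROM   emp
WHERE  rn = 1;   -- fails with ORA-00904: "RN": invalid identifier

-- Wrapping the analytic in an inline view evaluates it first:
SELECT *
FROM  (SELECT empno, ROW_NUMBER() OVER (ORDER BY sal DESC) rn FROM emp)
WHERE rn = 1;
</code>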
Categories: DBA Blogs

Error- ORA-12514

Tom Kyte - Fri, 2023-09-08 03:26
Hi, When I start the ORDS server I am getting the following error: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor. I have tried connecting using SQL*Plus as sys:
<code>
SQL> conn / as sysdba
Connected to an idle instance.
SQL> shutdown immediate
Error: ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
</code>
I have tried the STARTUP command:
<code>
SQL> startup
ORA-00093: pga_aggregate_limit must be between 6000M and 100000G
ORA-01078: failure in processing system parameters
</code>
I have set my pga_aggregate_limit to 4G.
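A hedged recovery sketch (standard practice, not from the question): since the instance will not start with the current value, generate a pfile from the spfile, raise pga_aggregate_limit to at least the minimum the error reports, and start from the edited pfile:
<code>
SQL> CREATE PFILE='/tmp/init_fix.ora' FROM SPFILE;
-- edit pga_aggregate_limit in /tmp/init_fix.ora (e.g. to 6000M), then:
SQL> STARTUP PFILE='/tmp/init_fix.ora';
SQL> CREATE SPFILE FROM PFILE='/tmp/init_fix.ora';
</code>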
Categories: DBA Blogs

Insert query with Where Clause

Tom Kyte - Fri, 2023-09-08 03:26
Respected sir/madam, I am new to Oracle Database with APEX. Can you please help me? I want to insert data into a table with a WHERE clause. Is it possible in Oracle?
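For context, the usual pattern is INSERT ... SELECT, where the WHERE clause filters the source rows (an illustrative sketch with placeholder table and column names):
<code>
INSERT INTO target_table (id, name)
SELECT id, name
FROM   source_table
WHERE  created_date > SYSDATE - 7;
</code>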
Categories: DBA Blogs

Subquery vs Shared List of Values, performance

Tom Kyte - Fri, 2023-09-08 03:26
We have report grids showing 2 or more (maybe 6) columns of "who" did what. To fetch and show their real name based on username: is it better for performance to have a subquery for each "who" column, or to just return the username in the query column and, in that column's settings, select Based On 'Display Value List of Values' with a Shared Component fetching all of the users? Grid query:
<code>
select <other things>,
       (select first_name || ' ' || last_name
          from security_users
         where user_name = vpd.Added_By) as ADDED_BY,
       (select first_name || ' ' || last_name
          from security_users
         where user_name = vpd.Submitted_By) as SUBMITTED_BY
  from VPD
</code>
OR
<code>
select <other things>, Added_By, SUBMITTED_BY
  from VPD
</code>
with each "who" column using a Shared Component:
<code>
select First_Name||' '||Last_Name as Display_Val,
       USER_NAME as Return_Val
  from security_users
</code>
When using the shared LOV, one thing I notice is that selecting the grid header shows a list of every user name returned from the shared LOV for filtering, even though they are not all in the grid results. Currently the number of users (the lookup table) is about 300. Same question but with a lookup table of, say, 10,000 rows.
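A third hedged option (mine, not from the question): resolve both names with outer joins once per row source instead of one scalar subquery per column, which tends to scale better as the lookup table grows:
<code>
select v.<other things>,
       a.first_name || ' ' || a.last_name as added_by,
       s.first_name || ' ' || s.last_name as submitted_by
  from vpd v
  left join security_users a on a.user_name = v.added_by
  left join security_users s on s.user_name = v.submitted_by
</code>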
Categories: DBA Blogs

Step by Step - How to Install NVIDIA Container Toolkit

Pakistan's First Oracle Blog - Thu, 2023-09-07 21:31

This video is a step-by-step guide on how to install and set up the NVIDIA Container Toolkit on Ubuntu with Docker.





Commands Used:


ubuntu-drivers devices

sudo apt install ubuntu-drivers-common

ubuntu-drivers devices

cat /etc/os-release

sudo apt autoremove nvidia* --purge

sudo /usr/bin/nvidia-uninstall

sudo /usr/local/cuda-X.Y/bin/cuda-uninstall

sudo apt update

sudo apt upgrade

sudo ubuntu-drivers autoinstall

reboot

curl https://get.docker.com | sh \
  && sudo systemctl --now enable docker

distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
     sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
     sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
     sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt-get update

sudo apt-get install -y nvidia-container-toolkit

sudo nvidia-ctk runtime configure --runtime=docker

sudo systemctl restart docker

sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

sudo groupadd docker

sudo usermod -aG docker ${USER}

docker run -d --rm -p 8008:8008 -v perm-storage:/perm_storage --gpus all smallcloud/refact_self_hosting

sudo docker run -d --rm -p 8008:8008 -v perm-storage:/perm_storage --gpus all smallcloud/refact_self_hosting


Categories: DBA Blogs

Falcon-180B Local Installation on Linux or Windows - Step by Step

Pakistan's First Oracle Blog - Thu, 2023-09-07 20:58

This is a tutorial on installing the Falcon-180B model locally on Linux or Windows, with all the steps.




Commands Used:


pip3 install "transformers>=4.33.0" "optimum>=1.12.0"


!git clone https://github.com/PanQiWei/AutoGPTQ

cd AutoGPTQ

! git checkout a7167b1

!pip3 install .



from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline


model_name_or_path = "TheBloke/Falcon-180B-Chat-GPTQ"


# To use a different branch, change revision

# For example: revision="gptq-3bit--1g-actorder_True"

model = AutoModelForCausalLM.from_pretrained(model_name_or_path,

                                             device_map="auto",

                                             revision="main")


tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)


prompt = "What is capital of Australia"

prompt_template=f'''User: {prompt}

Assistant: '''


print("\n\n*** Generate:")


input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()

output = model.generate(inputs=input_ids, do_sample=True, temperature=0.7, max_new_tokens=512)

print(tokenizer.decode(output[0]))


# Inference can also be done using transformers' pipeline


print("*** Pipeline:")

pipe = pipeline(

    "text-generation",

    model=model,

    tokenizer=tokenizer,

    max_new_tokens=512,

    temperature=0.7,

    do_sample=True,

    top_p=0.95,

    repetition_penalty=1.15

)


print(pipe(prompt_template)[0]['generated_text'])

Categories: DBA Blogs

No result from the query when counter exceeds 110000000

Tom Kyte - Thu, 2023-09-07 09:06
Hello, The query below does not return any value when the value of the counter is equal to or greater than 110000000. NOTE: it works well for 6,100 or 19,000,000, for example.
<code>
select to_char(to_number(counter))
from  (select level counter
         from dual
      connect by level <= 120000000)
where counter = (SELECT SUM(peevf.screen_entry_value)
                   FROM pay_element_entries_f peef,
                        pay_element_entry_values_f peevf,
                        pay_input_values_f pivf,
                        pay_sub_classn_rules_f pscrf,
                        pay_ele_classifications_tl pect
                  WHERE peef.element_entry_id = peevf.element_entry_id
                    AND peef.effective_start_date BETWEEN TO_DATE('01/01/2021','DD/MM/YYYY')
                                                      AND TO_DATE('31/12/2021','DD/MM/YYYY')
                    AND peef.person_id = <REPLACE with PersonID>
                    AND peevf.input_value_id = pivf.input_value_id
                    AND pivf.base_name = 'Amount'
                    AND peef.element_type_id = pscrf.element_type_id
                    AND pscrf.classification_id = pect.classification_id
                    AND pect.LANGUAGE = 'US'
                    AND pect.classification_name = 'TOTAL_VARIABLE_COMPENSATION')
</code>
Is this a limitation of "select level counter from dual connect by level"? NOTE: replace <REPLACE with PersonID> with the person ID in the query.
Categories: DBA Blogs

Effects of frequent redo log switches in an active data guard setup

Tom Kyte - Wed, 2023-09-06 14:46
Howdy, Just curious what (if any) deleterious effects we would see in the standby databases in an active data guard setup from switching redo logs every 2-3 minutes on the primary? I'm also curious what would be the best diagnostic methods to troubleshoot issues on read-only standby databases. Thanks!
Categories: DBA Blogs

Strange Oracle ID allocation behaviour

Tom Kyte - Tue, 2023-09-05 20:26
I'll summarise this query with a (hopefully easy to follow) scenario. We have two tables, TableOne and TableTwo. Both have identically defined ID columns ("ID" NUMBER GENERATED BY DEFAULT AS IDENTITY MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE NOKEEP NOSCALE NOT NULL ENABLE). We perform these actions: We insert a row into each. TableOne's highest ID is now 28; TableTwo's highest ID is now 1620. We commit. We wait four months. We insert a new row into each. TableOne skips several IDs (i.e. the sequence goes from 28 directly to 41); TableTwo does not skip any IDs (i.e. the sequence goes from 1620 to 1621). And my question is, why do the two tables behave differently?
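For diagnosis, a hedged sketch using a standard dictionary view (table names taken from the scenario): identity columns are backed by system-generated sequences, here with CACHE 20, and cached sequence values that are discarded (instance restart, shared pool aging) surface as gaps:
<code>
SELECT table_name, column_name, sequence_name
FROM   user_tab_identity_cols
WHERE  table_name IN ('TABLEONE', 'TABLETWO');
</code>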
Categories: DBA Blogs

Text to Audio AI Local Tool Free Installation - AUDIOLM 2

Pakistan's First Oracle Blog - Tue, 2023-09-05 03:14

This video is a step-by-step guide on how to install AudioLDM 2 locally on an AWS Ubuntu instance to generate audio from text. AudioLDM 2 supports text-to-audio (including music) and text-to-speech generation.



Commands Used:

sudo apt update

python3 --version

sudo apt install python3-pip

export PATH="$HOME/.local/bin:$PATH"

cd /tmp

wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh

sha256sum Anaconda3-2022.05-Linux-x86_64.sh

bash Anaconda3-2022.05-Linux-x86_64.sh

source ~/.bashrc

conda info

conda create -n audioldm python=3.8; conda activate audioldm

pip3 install git+https://github.com/haoheliu/AudioLDM2.git

git clone https://github.com/haoheliu/AudioLDM2; cd AudioLDM2

python3 app.py

Categories: DBA Blogs
