Monday, 18 December 2017

TechTip - Oracle SQL query to replace strings in clob column

If you are looking for an Oracle query to find and replace a string in a CLOB column in bulk, you can find that tip below.

Create a table with a CLOB column.

create table MyClobTable ( column1 int, clob_column clob );


Create a procedure to insert rows into the table.

create or replace procedure MyProc( proc_column1 in int, proc_text in varchar2 )
as
begin
insert into MyClobTable values ( proc_column1, proc_text );
end;


Insert the two records below.

exec MyProc(1, 'I am Narendra Verma and currently living in Atlanta. I visited a lot of places in Atlanta. Atlanta is a nice city in US.' );

exec MyProc(2, 'It is a great time to be in the City of Atlanta' );

commit;

Check that the rows were inserted.

select * from MyClobTable;


Output:
1
I am Narendra Verma and currently living in Atlanta. I visited a lot of places in Atlanta. Atlanta is a nice city in US.

2
It is a great time to be in the City of Atlanta

Execute the MERGE statement below to replace 'Atlanta' with 'Alpharetta' in all rows.

MERGE INTO MyClobTable A
     USING (SELECT column1,
                   TO_CLOB (REPLACE (clob_column, 'Atlanta', 'Alpharetta'))
                      AS updated_string
              FROM MyClobTable
             WHERE clob_column LIKE '%Atlanta%'
            )  B
        ON (A.column1 = B.column1)
WHEN MATCHED
THEN
   UPDATE SET A.clob_column = B.updated_string;
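
If you need to run the same replacement from a Java application, below is a minimal JDBC sketch. The connection URL, user, and password are placeholders for illustration, and the Oracle JDBC driver must be on your classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ClobReplaceRunner {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details - change them for your database
        String url = "jdbc:oracle:thin:@localhost:1521:ORCL";

        try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
             Statement stmt = conn.createStatement()) {

            conn.setAutoCommit(false);

            // Same MERGE statement as above, executed through JDBC
            int merged = stmt.executeUpdate(
                  "MERGE INTO MyClobTable A "
                + "USING (SELECT column1, "
                + "              TO_CLOB(REPLACE(clob_column, 'Atlanta', 'Alpharetta')) AS updated_string "
                + "         FROM MyClobTable "
                + "        WHERE clob_column LIKE '%Atlanta%') B "
                + "ON (A.column1 = B.column1) "
                + "WHEN MATCHED THEN UPDATE SET A.clob_column = B.updated_string");

            conn.commit();
            System.out.println("Rows updated: " + merged);
        }
    }
}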


Check that the word 'Atlanta' has been replaced with 'Alpharetta' in all rows.

select * from MyClobTable;


Output:
1
I am Narendra Verma and currently living in Alpharetta. I visited a lot of places in Alpharetta. Alpharetta is a nice city in US.
2
It is a great time to be in the City of Alpharetta

Hope this tech tip helped you.

Monday, 26 June 2017

Generating and Reading QR Code (Two-Dimensional Barcode)

If you are looking for a solution to generate and read QR codes in your Java application, you are visiting the right post. In this post I am going to demonstrate how to generate and read QR codes in Java.

About QR (Quick Response) Code 

QR Code is a two-dimensional barcode that is readable by smartphones. It can encode over 4000 characters in a two-dimensional barcode.

From Wikipedia: A QR code (abbreviated from Quick Response code) is a specific matrix barcode (or two-dimensional code) that is readable by dedicated QR barcode readers, camera telephones, and to a less common extent, computers with webcams. The code consists of black modules arranged in a square pattern on a white background. The information encoded may be text, URL, or other data.

QR codes are plastered on advertisements, billboards, business windows, and products. Nowadays they have become very popular and are utilized in many technical solutions. Paytm is a great example that has gained tremendous popularity: you just scan a QR code and pay. With the help of QR codes you can reduce the typing effort for your app users.

Open Source Lib for Barcode Image Processing 

ZXing ("zebra crossing") is an open-source, multi-format 1D/2D barcode image processing library implemented in Java. To get more detail refer this

Using ZXing, it's very easy to generate and read QR codes. If you want to generate or read QR codes in your Java code, you need to add the dependency below to your Maven project:


<dependency>
    <groupId>com.google.zxing</groupId>
    <artifactId>javase</artifactId>
    <version>2.0</version>
</dependency>


How to Generate QR Code?
Below is a Java example that generates a QR code for a given string. In this example I am using my blog URL 'http://nverma-tech-blog.blogspot.com/', for which I want to generate a QR code.

import java.io.FileOutputStream;

import com.google.zxing.BarcodeFormat;
import com.google.zxing.client.j2se.MatrixToImageWriter;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;

public class QRCodeGenerator {

    public static void main(String[] args) throws Exception {

        // This is the text that we want to encode
        String text = "http://nverma-tech-blog.blogspot.com/";

        // Change the height and width as per your requirement
        int width = 400;
        int height = 300;

        // Could also be "gif", "tiff", "jpeg"
        // (ImageIO.getWriterFormatNames() returns a list of supported formats)
        String imageFormat = "png";

        BitMatrix bitMatrix =
                new QRCodeWriter().encode(text, BarcodeFormat.QR_CODE, width, height);

        MatrixToImageWriter.writeToStream(
                bitMatrix, imageFormat, new FileOutputStream("MyBlogQRCode.png"));
    }
}
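
As a side note, ZXing's MatrixToImageWriter also provides a writeToFile convenience method, so the stream handling above can be replaced with a single call. A small sketch, assuming the same bitMatrix and imageFormat variables plus a java.io.File import:

// Alternative to writeToStream: let ZXing create the image file directly
MatrixToImageWriter.writeToFile(bitMatrix, imageFormat, new File("MyBlogQRCode.png"));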


Once you execute this program, it will generate a QR code image named 'MyBlogQRCode.png' in the directory where the program is executed. The generated QR code image will look like the one below:

Since I encoded my blog URL 'http://nverma-tech-blog.blogspot.com/' to generate this QR code, if you scan it with your mobile's camera, my blog URL will open in the browser.

While writing this post, I did the same on my mobile (Motorola X Play) and am sharing that flow with you. Ensure that the QR code scanner is enabled on your mobile.

As you scan the above QR code with your mobile camera, you will see the red-marked icon displayed.




Now click on this icon and two options will be displayed.


Now if you click on 'View website', the browser opens and you will see my blog accessed via the URL 'http://nverma-tech-blog.blogspot.com/' decoded from the QR code.



Now you can read my blog on your mobile too, without typing the actual URL in your mobile browser :).

How to Read QR Code?

Below is a Java example that decodes the generated QR code and returns the same string that was encoded in it.

import java.awt.image.BufferedImage;
import java.io.FileInputStream;
import java.io.InputStream;

import javax.imageio.ImageIO;

import com.google.zxing.BinaryBitmap;
import com.google.zxing.LuminanceSource;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.Reader;
import com.google.zxing.Result;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;

public class BarCodeReader {

    public static void main(String[] args) throws Exception {

        // Read the QR code image generated by QRCodeGenerator
        InputStream barCodeInputStream = new FileInputStream("MyBlogQRCode.png");
        BufferedImage barCodeBufferedImage = ImageIO.read(barCodeInputStream);

        // Convert the image into a binary bitmap that ZXing can decode
        LuminanceSource source = new BufferedImageLuminanceSource(barCodeBufferedImage);
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));

        // Decode the bitmap; MultiFormatReader auto-detects the barcode format
        Reader reader = new MultiFormatReader();
        Result result = reader.decode(bitmap);

        System.out.println("Decoded barcode text is - " + result.getText());
    }
}
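
Note that if the supplied image does not contain a readable barcode, reader.decode throws a NotFoundException. In real code you may want to guard the call; a minimal sketch (inside the same main method, which already declares throws Exception):

try {
    Result result = reader.decode(bitmap);
    System.out.println("Decoded barcode text is - " + result.getText());
} catch (com.google.zxing.NotFoundException e) {
    System.out.println("No barcode found in the supplied image");
}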


The output of this program is:


Decoded barcode text is - http://nverma-tech-blog.blogspot.com/



Hope this post helps you. If you have any questions or feedback, do write a comment. I will try to assist you.


Sunday, 28 May 2017


GIDS (Great Indian Developer Summit) 2017 - My Experience

GIDS has established itself as the gold standard conference and expo for the software practitioner ecosystem. Over 45,000 attendees have benefited from the platform since its founding in 2008. I got an opportunity to attend the world-class summit GIDS 2017 for the first time. I was able to meet a lot of enthusiastic developers from different parts of the country. Overall I experienced great learning, wonderful speakers, and cool products showcased at this summit. I would like to share my GIDS learning experience in this blog. I am also sharing a photo that I clicked during lunch time at GIDS Bangalore.




It was a five-day summit and each day had a dedicated stream, such as Web & Mobile, Java & Dynamic Languages, Data & Cloud, and DevOps & Architecture. I attended the DevOps & Architecture stream, which had around 20-30 presentations. There were also different companies showcasing their products and new ideas. I am listing a couple of those products below with references for more detail.

  • Salesforce Heroku: Heroku is a Platform as a Service where you can deploy and run applications.
  • Salesforce PredictionIO: PredictionIO is a machine learning framework that you can run on Heroku and use for tasks like intelligent recommendations based on users' past actions.
  • Flock: A faster way for your team to communicate.
  • Zoho Creator: Create custom apps for your unique business needs.
  • NodeRed: NodeRed is a programming tool for wiring together hardware devices, APIs, and online services in new and interesting ways.
  • MetamindIO: Smarter AI for developers.
  • NVDA: NVDA (NonVisual Desktop Access) is a free "screen reader" which enables blind and vision-impaired people to use computers. It reads the text on the screen in a computerised voice. You can control what is read to you by moving the cursor to the relevant area of text with the mouse or the arrow keys on your keyboard.
  • Chaos Monkey: An open source cloud-testing tool.
  • CUBA Platform: An open source framework for the rapid development of enterprise applications.
Let me also share some key points that I found great during the presentations.

Evolutionary Architecture (By Neal Ford)
For many years software architecture was described as the 'parts that are hard to change later'. But then microservices showed that if architects build evolvability into the architecture, changes become easier. The world is moving towards microservices-based architecture and away from the monolith, which tends to become a Big Ball of Mud later. Common concerns of a product, such as notifications, events, caching, and security, should not be plugged in directly; it is a good idea to have all of these as services which can be unplugged or replaced easily. To get more insight into this talk, you can refer to this link.

Journey of Enterprise Continuous Delivery (By a PayPal Developer)
In this presentation, one of the developers from PayPal demonstrated how PayPal achieved continuous delivery at a scale of 3,500 developers, 2K+ applications, and 15K deploys a month. He also shared a comparison between the old and new deployment life cycle benchmarks, which you will find very impressive.


Old vs new deployment life cycle benchmarks:

  • Build Time: Hours in 2013; Minutes today
  • Release Duration: 5-6 days in 2013; less than 10 minutes today
  • Team involved in a release: Release Management, Release Engineering & Dev teams in 2013; any individual (on a single click) today
  • Feedback time on quality analysis: Minimum 1 day in 2013; less than 30 minutes today

Due to limitations in different continuous integration tools like Jenkins, GO, and Bamboo, PayPal decided to build its own in-house tool, ECD (Enterprise Continuous Delivery), using Spring Batch, Jersey, and AngularJS on top of the existing capabilities of Jenkins. ECD has additional features like an automated creation flow, flexibility to extend steps, parallel processing, a simple user interface, and YAML-based definitions. PayPal achieved continuous and fast delivery after adopting a microservices architecture, the Docker cloud platform, and ECD (its in-house continuous delivery tool). Most IT organizations are looking for continuous and fast delivery and are moving away from the traditional monolithic architecture pattern.

Designing Systems for the Remaining 15% of Population
According to the WHO, there are around 1.2 billion people with disabilities in the world who face various difficulties in working with digital systems. If, as IT solution providers, we can remove the barriers that users with disabilities face in accessing these systems, we can expand our markets. Such fixes would not only expand markets to users with disabilities; many of them would also improve usability for people without disabilities.

How we as developers can contribute:
  • Consistent navigation
  • Simple interface (well-spaced UI controls, good gaps between UI controls, simple language)
  • Device-independent input (keyboard use, touch screen, etc.)
  • Multi-sensory output (text alternatives for images, contrast, captions/subtitles/transcripts)
  • Programmatic access (for the web: properly indented HTML structure; for desktop: using standard controls, etc.)

From Spaghetti to Microservices Architecture (By Stefano Tempesta)
This session explores the agility of architecting fine-grained microservice applications that benefit continuous integration and development practices, and accelerated delivery of new functions into production, with the help of Azure Service Fabric. It also presents the Publish-Subscribe design pattern of an enterprise-level service bus built on Azure Service Bus, which guarantees message queueing and delivery, on-premises and in the cloud.


My Takeaways from GIDS
  • Evolutionary architectural practices and key notes by Neal Ford
  • Latest technology trends for continuous delivery 
  • Microservices architecture and its challenges
  • Design considerations for users who are differently abled
  • Different products/ideas (listed above) which are showcased
  • Sample project creation on Salesforce Heroku platform
  • MEAN vs LAMP architecture
  • Google's AMP (Accelerated Mobile Pages)
  • Basics of lambda architecture, batch/speed/serving layer and VoltDB

Finally, I would like to conclude that overall it was an enriching experience where I got to know different architectural aspects, the latest technology trends, and different new products in the market.


Friday, 18 December 2015

Apache Kafka – Java Producer Example with Multibroker & Partition

In this post I will demonstrate how you can implement a Java producer that connects to multiple brokers and how you can produce messages to different partitions of a topic.

I have also published a couple of other posts about Kafka. If you are new and would like to learn Kafka from scratch, I recommend walking through those posts first.

Prerequisite 
I am assuming that you already have Kafka set up in your local environment. If not, you can set up Kafka in a Windows environment by following this link.

Setup Multibroker and Topic with Partitions
1. First you need to start the ZooKeeper server. To start it, execute the command below. <kafka_dir> needs to be replaced with the location where you have installed Kafka.
<kafka_dir>\bin\windows\zookeeper-server-start.bat ..\..\config\zookeeper.properties
2. Go to the <kafka_dir>\config\server.properties file and make a copy of it at the same location, say 'first-broker-server.properties'.

3. You just need to change a couple of properties in first-broker-server.properties to set up the first broker.
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

# The port the socket server listens on, it should be unique for each broker

port=9092

# A comma separated list of directories under which to store log files

log.dirs=<kafka_dir>/kafka-logs/first-broker-server

# Zookeeper connection string. This is the host and port where your ZooKeeper server is running.

zookeeper.connect=localhost:2181
4. Go to the <kafka_dir>\config\server.properties file and make another copy of it at the same location, say 'second-broker-server.properties'.

5. Now change the properties in second-broker-server.properties for the second broker.
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

# The port the socket server listens on, it should be unique for each broker

port=9093

# A comma separated list of directories under which to store log files

log.dirs=<kafka_dir>/kafka-logs/second-broker-server

# Zookeeper connection string. This is the host and port where your ZooKeeper server is running.

zookeeper.connect=localhost:2181
6. Now you need to start both brokers. To start them, execute the commands below:

    Start first broker:
<kafka_dir>\bin\windows\kafka-server-start.bat ..\..\config\first-broker-server.properties
    Start second broker:
<kafka_dir>\bin\windows\kafka-server-start.bat ..\..\config\second-broker-server.properties
7. Now create the topic 'EmployeeLoginEventTopic' (the topic used by the producer and consumer below) with 2 partitions and a replication factor of 2.
<kafka_dir>\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 2 --partitions 2 --topic EmployeeLoginEventTopic
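You can also verify the topic layout (partitions, leaders, and replicas) with the describe option of the same tool:
<kafka_dir>\bin\windows\kafka-topics.bat --describe --zookeeper localhost:2181 --topic EmployeeLoginEventTopic
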
Java Producer Example with Multibroker And Partition
Now let's write a Java producer which will connect to the two brokers. We already created the topic 'EmployeeLoginEventTopic' with 2 partitions. In this example we will see how we can send a message to a specific partition of a topic.


In this example program, I have tried to simulate sending employee login events to different Kafka brokers. A randomly generated employeeId will be used as the key, and messages will be sent to different partitions in the format 'EmployeeID:<employeeId>, LoginTime: <currentDate&Time>'.


First of all, you need to understand which properties are required to initialize the producer:
Properties props = new Properties();
props.put("metadata.broker.list", "localhost:9092,localhost:9093"); //1
props.put("serializer.class", "kafka.serializer.StringEncoder"); //2
props.put("partitioner.class", "com.nvexample.kafka.partition.PartitionerExample"); //3
props.put("request.required.acks", "1"); //4
  • In the first property, you need to mention the list of Kafka brokers the producer will connect to.
  • In the second property, the serializer class for the message key needs to be mentioned. You can use the default class, i.e. 'kafka.serializer.StringEncoder'.
  • In the third property, you need an implementation of the 'kafka.producer.Partitioner' interface. In this implementation you write the logic that decides which message should be sent to which partition based on the message key.
  • In the fourth property, set '1' if you want to make sure that the producer is acknowledged once a message is received by the broker successfully.
Partitioner Class Implementation: 
package com.nvexample.kafka.partition;

import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

public class PartitionerExample implements Partitioner {

    public PartitionerExample(VerifiableProperties props) {
    }

    public int partition(Object employeeIdStr, int numOfPartitions) {
        int partition = 0;
        String stringKey = (String) employeeIdStr;
        Integer intKey = Integer.parseInt(stringKey);
        if (intKey > 0) {
            partition = intKey % numOfPartitions;
        }
        System.out.println("Returning partition number [" + partition + "] "
                + "for key [" + employeeIdStr + "]");
        return partition;
    }
}

In this implementation class, we take the key 'employeeIdStr' and perform a modulo operation with the number of partitions configured on the topic 'EmployeeLoginEventTopic'. This partitioning logic ensures that messages with the same key are always sent to the same partition; in other words, all login events for the same employeeId will be served by the same partition.
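
If you want to see the mapping this logic produces without running Kafka at all, here is a tiny standalone sketch that mimics the modulo step for a topic with 2 partitions, using keys 0-9 like the randomly generated employee ids below:

public class PartitionMappingCheck {

    public static void main(String[] args) {
        int numOfPartitions = 2; // same as our topic's partition count

        for (int employeeId = 0; employeeId < 10; employeeId++) {
            // Mirrors PartitionerExample: modulo for positive keys, partition 0 otherwise
            int partition = (employeeId > 0) ? employeeId % numOfPartitions : 0;
            System.out.println("Key [" + employeeId + "] -> partition [" + partition + "]");
        }
    }
}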

Multibroker Producer Example:
package com.nvexample.kafka.partition;

import java.util.Date;
import java.util.Properties;
import java.util.Random;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerWithPartitionExample {

    private static Producer<String, String> producer;
    public final static String brokerList = "localhost:9092,localhost:9093";
    public final static String PARTITIONER_IMPLEMENTATION_CLASS
            = "com.nvexample.kafka.partition.PartitionerExample";
    private static final String TOPIC = "EmployeeLoginEventTopic";

    public void initialize() {
        Properties props = new Properties();
        props.put("metadata.broker.list", brokerList);
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("partitioner.class", PARTITIONER_IMPLEMENTATION_CLASS);
        props.put("request.required.acks", "1");
        ProducerConfig config = new ProducerConfig(props);
        producer = new Producer<String, String>(config);
    }

    public void publish(String key, String message) {
        KeyedMessage<String, String> data = new KeyedMessage<String, String>(
                TOPIC, key, message);
        producer.send(data);
    }

    public void closeProducer() {
        producer.close();
    }

    public static void main(String[] args) {
        ProducerWithPartitionExample producerWithPartition
                = new ProducerWithPartitionExample();
        // Initialize the producer with the required properties
        producerWithPartition.initialize();
        // Publish messages to the brokers
        Random rnd = new Random();
        for (long employeeLogInEvent = 0; employeeLogInEvent < 10; employeeLogInEvent++) {
            String employeeId = String.valueOf(rnd.nextInt(10));
            String msg = "EmployeeID:" + employeeId + ", LoginTime: " + new Date();
            producerWithPartition.publish(employeeId, msg);
        }
        // Close the connection between broker and producer
        producerWithPartition.closeProducer();
    }
}

In this example, we send an employee login event as the message, with the employeeId as the key. Note that even if you have defined a partitioner class, when no key is sent along with a message Kafka assigns the message to a random partition.
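
For illustration, a keyless publish could look like the hypothetical helper below, added to ProducerWithPartitionExample. It uses the two-argument KeyedMessage constructor, so our PartitionerExample is never consulted:

// Hypothetical helper: publish without a key and let Kafka pick the partition
public void publishWithoutKey(String message) {
    KeyedMessage<String, String> data =
            new KeyedMessage<String, String>(TOPIC, message);
    producer.send(data);
}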

Java Consumer Example 
This is the consumer program. On starting this program, the consumer connects to the brokers via ZooKeeper and starts consuming the messages published on 'EmployeeLoginEventTopic'.

package com.nvexample.kafka;

import java.util.*;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaConsumer {

    private ConsumerConnector consumerConnector = null;
    private final String topic = "EmployeeLoginEventTopic";

    public void initialize() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "testgroup");
        props.put("zookeeper.session.timeout.ms", "400");
        props.put("zookeeper.sync.time.ms", "300");
        props.put("auto.commit.interval.ms", "1000");
        ConsumerConfig conConfig = new ConsumerConfig(props);
        consumerConnector = Consumer.createJavaConsumerConnector(conConfig);
    }

    public void consume() {
        // Key = topic name, Value = no. of threads for the topic
        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        topicCount.put(topic, new Integer(1));

        // ConsumerConnector creates the message stream for each topic
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreams =
                consumerConnector.createMessageStreams(topicCount);

        // Get the Kafka streams for our topic
        List<KafkaStream<byte[], byte[]>> kStreamList = consumerStreams.get(topic);

        // Iterate over each stream using ConsumerIterator
        for (final KafkaStream<byte[], byte[]> kStreams : kStreamList) {
            ConsumerIterator<byte[], byte[]> consumerIte = kStreams.iterator();

            while (consumerIte.hasNext())
                System.out.println("Message consumed from topic [" + topic + "] : "
                        + new String(consumerIte.next().message()));
        }
        // Shut down the consumer connector
        if (consumerConnector != null) consumerConnector.shutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        KafkaConsumer kafkaConsumer = new KafkaConsumer();
        // Configure the Kafka consumer
        kafkaConsumer.initialize();
        // Start consumption
        kafkaConsumer.consume();
    }
}



Output of Producer and Consumer Program
Execute the consumer program first and then the producer program. You will get the following output. Notice in the producer program's console that the same partition number is returned for the same key. This ensures that messages for the same employee id are sent to the same partition.

ProducerWithPartitionExample.java Program Console Output:



KafkaConsumer.java Program Console Output:


Hope this post helped you learn about the Java Kafka producer with multibroker and partition support. It would be great if you left your feedback on this post.