Monday, March 12, 2018
So, go for the most popular open-source software that has a large community of support behind it, so you have somewhere to go if you need advice.
Monday, February 26, 2018
Sunday, February 18, 2018
By 2020, 50%+ of all enterprise data will be managed autonomously and 80%+ of application and infrastructure operations will be resolved autonomously. This is quite possible with AI becoming a reality and cementing its foot in the industry. To move in this direction, we first need to automate all the DBA tasks, and then implement Machine Learning so that the database makes its own decisions based on what is going on inside it, with minimal or no involvement of DBAs. To automate all of those tasks, we need to develop a framework, which I call DBAaaS, define role-based access, and develop a DBAaaS Mobile App / Portal!
Here is a quick prototype of the "DBAaaS 1.0" mobile app, which I developed in less than 2 hours using a rapid prototyping tool!
URL : https://gonative.io/share/rzydnx
User name : testing
Password : test
Monday, February 5, 2018
In this blog, I will cover Pivotal Container Service (PKS), Kubernetes (K8s), Docker and containers. Before we touch PKS, let's understand what Docker and containers are!
Once upon a time, there was the physical server era, in which we used to have a very large server, install an OS on it, and install various applications on top of that! Then the hypervisor architecture was born: on the same server, you just need to install a hypervisor, which enables you to create multiple Virtual Machines, and in each VM you can install an OS and the required app. Now there is a new container era!
There are advantages and disadvantages to running containers directly on a server. However, most companies are taking advantage of both hypervisor technology and container technology to build their next-generation platforms.
Now let's look at Kubernetes, generally called K8s. It is an orchestration tool for containers.
A K8s cluster consists of 2 major parts, the Master and the Nodes. Nodes are sometimes called Minions as well.
The Master has 4 major parts:
1) kube-apiserver : Front-end to the control plane, exposes the API (REST) and Consumes JSON
2) Cluster store: Persistent storage for cluster state and config. It uses etcd, the “source of truth” for the cluster, so have a backup plan for it!
3) kube-controller-manager: Controller of controllers, Watches for changes & Helps maintain desired state
4) kube-scheduler : Watches apiserver for new pods, assigns work to nodes
Each Node has 3 major parts and runs Pod(s) inside it.
1) Kubelet : The main Kubernetes agent; registers the node with the cluster, watches the apiserver, instantiates pods, reports back to the master, and exposes an endpoint on :10255
2) Container Engine: Does container management, such as pulling images and starting/stopping containers. Generally Docker, but it can be rkt as well.
3) kube-proxy: Kubernetes networking, Pod IP addresses. All containers in a pod share a single IP. Load balances across all pods in a service
You can run multiple Pods on one node. It is not typically recommended to run a large number of containers in a pod; the best practice is to run a primary container along with additional containers that provide services to the primary container in a given pod, as in the sketch below.
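As a quick illustration of that layout (a minimal sketch only; the pod name, images and ports below are hypothetical), a pod with one primary container and one helper container can be declared and created like this:
cat > web-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web                # primary container
    image: nginx:1.13
    ports:
    - containerPort: 80
  - name: log-sidecar        # helper container supporting the primary one
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]
EOF
kubectl apply -f web-pod.yaml      # schedule the pod on a node
kubectl get pods                   # verify both containers come up in one pod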
And finally, let's see PKS!
PKS gives IT teams the flexibility to deploy and consume Kubernetes on-premises with vSphere, or in the public cloud. PKS 1.0 currently supports vSphere and GCE. PKS leverages a specific BOSH release for K8s, which has specific requirements. A short CLI sketch follows the component list below.
1) PKS Controller : The control plane from which you create, operate, and scale Kubernetes clusters via the command line and API.
2) Built with open-source Kubernetes : Constant compatibility with GKE ensures access to the latest stable K8s releases.
3) BOSH : BOSH provides a reliable and consistent operational experience, whether for your private cloud running on vSphere 6.5 or for the GCE public cloud.
4) Harbor : Harbor is your container repository
5) GCP Service Broker : The GCP Service Broker allows apps to transparently access Google Cloud APIs, from anywhere. Easily move workloads to/from Google Container Engine (GKE).
6) NSX-T : Network management and security out-of-the-box with VMware NSX-T. Multi-cloud, multi-hypervisor.
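To give a feel for the operator workflow (a hedged sketch only; the API endpoint, cluster name, plan and hostname are placeholders that depend on how PKS was installed), a cluster is typically created through the PKS CLI and then consumed with the standard kubectl:
pks login -a api.pks.example.com -u admin -p 'secret' -k     # authenticate against the PKS API
pks create-cluster k8s-demo --external-hostname k8s-demo.example.com --plan small   # BOSH builds the cluster
pks cluster k8s-demo                                         # watch provisioning status
pks get-credentials k8s-demo                                 # merge kubeconfig for the new cluster
kubectl get nodes                                            # use it like any other K8s cluster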
Tuesday, November 28, 2017
In this blog I will provide an overview of Serverless and Codeless Cloud Native Applications: the available options, how to build a secure eCommerce website, and the advantages & challenges, along with a live example.
Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity such as VM / Containers etc. It is a form of utility computing. Serverless computing still requires servers. The name "serverless computing" is used because the server management and capacity planning decisions are completely hidden from the developer or operator. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned services at all. Key benefits of a serverless architecture include automatic scale up and down in response to current load and the associated cost model that charges only for milliseconds of compute time used when running.
There are several options available for serverless CNAs. The most popular and notable are:
Openwhisk - OpenWhisk is a serverless, open source cloud platform that executes functions (called actions) in response to events (called triggers) without developer concern for managing the lifecycle or operations of the containers that execute the code.
AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.
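As a small taste of that zero-administration model (a sketch only; the function name, role ARN and runtime are assumptions, and the code is assumed to be zipped into function.zip already), a function can be created and invoked straight from the AWS CLI:
aws lambda create-function \
  --function-name hello-fn \
  --runtime python3.6 \
  --role arn:aws:iam::123456789012:role/lambda-basic-execution \
  --handler lambda_function.lambda_handler \
  --zip-file fileb://function.zip          # upload the code; AWS provisions everything else
aws lambda invoke --function-name hello-fn --payload '{"name":"world"}' out.json   # pay only for this invocation
cat out.json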
Build Secure eCommerce website for free!
So the question is, can we build a secure eCommerce website without any developer, QA, portal admin, DBA, sysadmin or servers? Yes, this is possible! Here is what you need and what you can use to build your eCommerce site almost for free!
Website : Create your static website in Jekyll and deploy it on GitHub for free (see the sketch after this list)!
Access Management : Plug in a cloud-based identity management solution such as userapp.io or Firebase
Cloud based Database : Plug in the free Firebase database at the back end to store your data
eCommerce functions : Plug SnipCart into your website for shopping cart and payment gateway functionality
Digital Delivery : Plug in SendOwl and SendGrid for digital goods delivery and marketing, and you are done!
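For the website piece, here is a rough sketch of how little it takes (the repository and user names are placeholders; the Snipcart / SendOwl snippets are then pasted into the generated pages as per their documentation):
gem install jekyll bundler                     # one-time Jekyll install
jekyll new mystore && cd mystore               # scaffold the static site
bundle exec jekyll serve                       # preview locally at http://localhost:4000
git init && git add . && git commit -m "initial store"
git remote add origin https://github.com/<your-user>/<your-user>.github.io.git
git push -u origin master                      # GitHub Pages hosts it for free
With that in place, the plugins listed above handle identity, data, cart and delivery.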
- No need to own anything - run almost for free till your business picks up
- Work on your product features rather than building / managing your website
- No dependency on one vendor / Multiple options for similar services in market
- Multiple plugins and interoperability between them
- Gradually you may have to start paying as and when you exhaust free limits and start looking at cost optimization using other means.
And here is an example: I built a mobile app/site for my son in 6 hours without a Developer / QA / PM / IDM / Portal admin / DBA / Sys admin or Servers! Check it out!
URL : https://goo.gl/wb2bNK
User name : testing
Password : test
Enjoy, and welcome to the future of Cloud !
Friday, October 27, 2017
PCF in a Nutshell
PCF is the enterprise-grade distribution of Cloud Foundry, which is open-source software. As described in the following diagram, traditionally we used to manage the entire IT stack from top to bottom; as we evolved into the private cloud, we started offering IaaS and PaaS services. So PCF is essentially a PaaS offering which is enterprise grade and runs on any IaaS, well, most of the leading IaaS platforms!
Let's look, at a very high level, at what's in it. In the following diagram, I have tried to explain PCF from 2 angles, Infrastructure and Operations.
The biggest advantage of PCF is rapid application deployment and scaling; we just need 2 commands to deploy and scale applications in PCF, as shown below!
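For example (the app name and artifact path are illustrative), those 2 commands are simply:
cf push my-app -p target/my-app.jar    # deploy: PCF uploads, stages and runs the app
cf scale my-app -i 4                   # scale out to 4 instances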
PCF Laptop Lab
Best practices for Enterprise PCF deployments
- Size your PCF using the PCF sizing tool
- Store all the passwords in KeePass
- Make sure that you setup at least 3 Availability Zones
- Plan and design your Org, Spaces, Apps and Security of Applications well in advance before you start the setup
... Will keep updating this section.
Wednesday, October 25, 2017
Once the plug-in is installed, please run the 3 reports, which provide a very interesting dashboard and detailed reports that help you:
- Identify mis-configured clusters, hosts and VMs.
- Identify performance problems and their root causes.
- Reclaim underutilized CPU, memory and disk space.
Check out sample report videos here.
Thursday, October 12, 2017
Before going to best practices, lets understand what is Kafka. Kafka is publish-subscribe messaging rethought as a distributed commit log and is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.
Here is the high-level conceptual diagram of Kafka, in which you can see a Kafka cluster of size 4 (4 brokers) managed by Apache ZooKeeper, serving multiple producers and consumers. Messages are sent to topics. Each topic can have multiple partitions for scaling. For fault tolerance we have to use a replication factor, which ensures that each partition's messages are replicated to multiple brokers.
To set up the Kafka laptop lab, install VMware Workstation, create an Ubuntu VM, then download and unzip Kafka:
tar -xvf kafka_2.11-0.11.0.1.tgz
-- Set environment parameters
-- Add the following 2 lines at the end of the .bashrc file, then save and close the file.
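The 2 lines themselves are not reproduced in this post; assuming Kafka was unzipped under the home directory, they would typically be:
export KAFKA_HOME=~/kafka_2.11-0.11.0.1
export PATH=$PATH:$KAFKA_HOME/bin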
-- Exit and open a new terminal
-- Install JDK
sudo apt-get purge openjdk-\*
sudo mkdir -p /usr/local/java
sudo apt-get install default-jre
-- Verify the installation
java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-2ubuntu1.17.04.3-b11)
OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)
-- Start ZooKeeper and the Kafka server
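The startup commands are not listed in the original post; assuming the default property files shipped with the Kafka distribution, they are typically (run each in its own terminal):
zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties
kafka-server-start.sh $KAFKA_HOME/config/server.properties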
-- Create, list and describe topics
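The create and list commands are not shown above; with a single local broker they would typically look like this (the topic name and partition count are illustrative):
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic mytopic
kafka-topics.sh --list --zookeeper localhost:2181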
kafka-topics.sh --describe --zookeeper localhost:2181
-- Start the producer console
kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic
-- Start Consumer console
kafka-console-consumer.sh --zookeeper localhost:2181 --topic mytopic --from-beginning
In this screenshot you can see that I have started ZooKeeper and Kafka in the top 2 terminals, created the topic in the middle terminal, and the bottom 2 terminals have the producer and consumer consoles. You can see the same messages in the producer and the consumer.
Best Practices for Enterprise implementation
Sharing best practices for Enterprise level Kafka Implementation
- Make sure that Zookeeper is on different server than Kafka Brokers.
- There should be a minimum of 3 to 5 ZooKeeper nodes in one ZooKeeper cluster
- Make sure that you are using the latest Java 1.8 with the G1 collector
- There should be a minimum of 4-5 Kafka brokers in the Kafka cluster
- Make sure that there is a sufficient / optimal number of partitions for each topic; the higher the number of partitions, the more parallel consumers can be added, resulting in higher throughput. However, more partitions can also increase latency.
- Use a replication factor of at least 2 for each topic for fault tolerance; again, a higher replication factor has an impact on performance
- Make sure that you install and configure monitoring tools such as Kafka Manager
- If possible implement Kafka MirrorMaker for replication across data-centers for Disaster Recovery purpose
- For Delivery Guarantees set appropriate value for Broker Acknowledgement (“acks”)
- For exceptions / broker-responds-with-error cases, set proper values for the number of retries, retry.backoff.ms and max.in.flight.requests.per.connection (see the sketch below)
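A minimal sketch of what those producer settings might look like (the values are illustrative, not recommendations for every workload), reused with the console producer from the lab above:
cat > producer.properties <<'EOF'
# wait for all in-sync replicas to acknowledge each write
acks=all
# retry transient broker errors, with a pause between attempts
retries=3
retry.backoff.ms=500
# preserve message ordering while retries are in flight
max.in.flight.requests.per.connection=1
EOF
kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic --producer.config producer.properties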
I will keep appending to this section on a regular basis.
Tuesday, October 10, 2017
In this blog, I am going to describe an overall vRA implementation project plan which can be used as a sample for any vRA implementation.
We need a variety of skills for this implementation, such as cloud admin, OS admin, process expert, monitoring tools team, project manager, technical manager, and last but not least, customer management!
Timelines mentioned in this sample project plan are indicative and may vary depending on complexity. For example, creating a handful of templates and 20-odd blueprints without any application or database may take less time; however, when we are considering provisioning of applications and databases using vRA, we need to allow more time, including testing time.
The information-gathering stage is very important; make sure that the customer understands the advantages, disadvantages, product features and limitations which need to be considered while designing the vRA solution.
Saturday, November 26, 2016
Wednesday, November 9, 2016
Please refer following white paper for more details.
Sunday, September 18, 2016
1) CIO Bottom Line award (2013) for automated health checks of the load-test environment. Before we start any load test, we need to perform a health check on the entire environment and restart services in sequence on all 150 servers. This used to be a manual, labor-intensive activity that had to follow a strict sequence and needed too many handshakes between various admin teams. We automated all the health checks and also automated the restart of all services, including databases, middleware, portals, eBS, IDM etc., with the sequencing built in. This resulted in a saving of around $110K/year.
2) Question of the day (2009): We were running a 24x7 monitoring operation, and we hardly used to get time for classroom training for my team. So I came up with an innovative idea: why can't they learn a small portion of the technology every day? For them, an understanding of OEM was a must, so I came up with a series of tasks and questions, sequenced them, and automated them to be sent to team members on a daily basis so that they could try these activities and answer these practical questions in their spare time. For this innovative idea of training, I was also awarded.
3) Remote Monitoring System (2005): Remote Monitoring Service is a systems-management solution which I designed around Enterprise Manager Grid Control technology; the majority of this solution was in and around 10g Grid Control. It proactively monitors all components of the IT infrastructure: databases, listeners, application servers, storage, CPU, memory, load balancers and so on. And nowadays, using plug-ins, even third-party software such as IBM and Microsoft databases can be monitored. It immediately sends alerts and notifications to the relevant registered mail IDs, such as DBAs, Unix administrators and the helpdesk, or sometimes to managers as well with a short message for very critical errors. It has in-built intelligence through “Fixit Jobs”; for example, if a database goes down, we can proactively give instructions to restart the database. We also wrote scripts to fix regular DBA issues; for example, if a tablespace runs out of space, it automatically adds a data file to that tablespace and informs the DBA about the action it has taken. Customizable warning and critical thresholds: different customers have different standards for warning and critical levels; for example, one customer may say 85% is the warning and 95% the critical limit, while these limits may be different for another customer, and this can be achieved by setting different limits per customer, so it is customizable as per customer needs. And finally, it facilitates conformance to Service Level Agreements. This became an entry-level service and started generating huge revenue in the form of mainline services. For this innovative idea, I was awarded.
Sunday, October 11, 2015
The theme for this edition was - “Leadership is all about a Brand of Trust”
About SAP ManaGeRight: At SAP, manager development activities are run by a team of managers called ManaGeRight. ManaGeRight has been organizing several programs over the past few years and we are in fact the first company to organize a Managers Day, bringing together all of our 400+ managers under one umbrella for a day.
The New Age Manager 2015 - A Novel Idea by SAP: Leveraging our expertise and learnings from various flagship programs, we are now taking the next step and organizing ‘The New Age Manager’ as a platform for the best managers across the industry to come together and learn and share from each other. The objective of the conclave was to create a forum to enable the best managers in the industry:
- To hear expert opinions and benefit from shared learning.
- To network with the peers.
- To collaborate on best practices.
This Manager Conclave was the first of its kind arranged by SAP India.
I was requested to participate in the panel discussion on the topic - “New Age Performance Management – Are Bell Curves Needed?”.
Friday, July 31, 2015
Friday, March 20, 2015
I strongly believe, "When you stop learning, you stop growing.."
I have started to learn SAP now!
In this blog, I have listed a few important SAP transaction codes that a SAP BASIS admin must know.
Tuesday, February 3, 2015
One thing I want to stress is that you still need to manage the classic data center; however, there are several things you need to understand, build and manage on top of the classic DC to make it a virtual DC.
The following diagram gives the overall picture of what you have to manage in IT operations in the case of a classic DC and a virtual DC, apart from the various apps and websites.
Wednesday, January 28, 2015
Why Cloud Computing?
The IT challenges listed below have made organizations think about the Cloud Computing model to provide better service to their customers
- Globalization: IT must meet the business needs to serve customers world-wide, round the clock - 24x7x365.
- Aging Data Centers: Migration, upgrading technology to replace old technology.
- Storage Growth: Explosion of storage consumption and usage.
- Application Explosion: New applications need to be deployed and their usage may scale rapidly; current data center infrastructures are not planned to accommodate such rapid growth.
- Cost of ownership: Due to increasing business demand, the cost of buying new equipment, power, cooling, support, licenses, etc., increases the Total Cost of Ownership (TCO).
- Acquisitions: When companies are acquired, the IT infrastructures of the acquired company and the acquiring company are often different. These differences in the IT infrastructures demand significant effort to make them inter-operable.
- On-Demand Self-Service
- Resource Pooling
- Rapid Elasticity
- Measured Service
- Broad Network Access
- Infrastructure as a service
- Platform as a service
- Software as a service
- Public Cloud: Infrastructure Shared across multiple end users which may include companies
- Private Cloud : Exclusive for one company, it can be on-premise / exclusively hosted at cloud service provider
- Hybrid Cloud : Combination of Public and Private cloud
- Community Cloud : A set of similar types of customers come together and share infrastructure; for example, multiple universities contribute to and use one cloud infrastructure.
- Security and Regulations
- Quality of service
- Network Latency
- Long term cost
- Service Warranty and service cost
- Huge number of s/w to manage
- No standard cloud access interface
Tuesday, June 19, 2012
Wednesday, June 13, 2012
Changing password of SYS, SYSTEM, DBSNMP
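A minimal sketch of that task (the passwords shown are placeholders; if Enterprise Manager or other monitoring uses DBSNMP, remember to update the stored password in the monitoring configuration as well):
sqlplus / as sysdba <<'EOF'
ALTER USER sys    IDENTIFIED BY "New_Sys_Pwd_1";
ALTER USER system IDENTIFIED BY "New_System_Pwd_2";
ALTER USER dbsnmp IDENTIFIED BY "New_Dbsnmp_Pwd_3";
EOF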
- Do Not Allow Shared Accounts
- Do Not Use Generic Passwords
- Treat All Non-Production Instances With The Security As Production
- Restrict Network Access - Set Password on Database Listener
- Minimize Passwords Contained In OS Files
- Secure Default Database Accounts
- Be Proactive!
- Apply all prior, and plan in advance to apply any new Oracle Security Patches
- Limit Access To Forms Allowing SQL Entry
- Stop isqlplus process on server side (if started)
- Restrict Network Access - Limit Direct Access To The Database
- Change the passwords at least once in 3 months
Note: As per metalink id 1158212.1, after E-business version 11.5.10 this request generally does not need to be run.
Tuesday, June 12, 2012
Initialization & listener parameter
AWR, Alert.log, listener log, OS watcher, RDA
Invalid Objects, Indexes and fragmentation
Tablespaces, Data files, log files and control files
Custom objects in SYSTEM tablespace & SYSTEM tablespace as default tablespace
Stats job schedule
Workload balancing/distribution in clustered environments
Database Patch level, de-support, and patching strategy (CPU, one off)
Server disk space for DB growth, Archive log, backup destination
Server level pre-req’s, errors, warnings & background jobs
Database Backup and Recovery
Database Monitoring and alerting system
Database Disaster Recovery solution
Debugging latch contention, hangs, crashes & locking issues
Oracle Applications Infrastructure Review (eBS) Points / Areas
Database review as per earlier slide
Application Technical Architecture
Application Backup and Recovery
Application Security, Audit, and security profile options
- Standard Manager programs and their parameters
Application Monitoring and alerting system
Application Disaster Recovery solution
Application Patch level, de-support, and patching strategy
Network (Latency and Bandwidth)
JDBC connection parameters
Forms & Reports server
Standard Concurrent Manager
Recommendation on best practices for routine administrative tasks etc.
Monday, April 16, 2012
Hi to all,
In this blog I will discuss some of the main points which an Apps DBA should know about install, upgrade and admin scripts. So let's begin with the Installer.
Main points about Installer:
1) config.txt is now configSID.txt; for adding a node, you can use configSID.txt or get the details directly from the database using the host.domain:port:sid format
2) Install types: Standard and Express
3) Shared APPL_TOP, COMN_TOP and tech stack as well, but not for Windows
4) Easy load balancing of CP and Web communications
5) Technology Stack Components : Oracle 10g R2 Database home, Oracle Developer 10i (forms, reports) and Oracle 10g Application Server 10.1.2 (http server)
6) Java Development Kit (JDK) 5.0 is automatically installed by Rapid Install
7) Disk Space : Applications node 28 GB , Fresh DB 45 GB, Vision DB 133 GB, Stage for fresh install 33 GB, TEMP 500 Mb
8) Create Stage : The CDs are in DVD format; run adautostg.pl to create the directory structure, which requires Perl 5.0053 in the PATH and creates the subdirectories startCD, oraApps, oraDB, oraAS, and oraAppDB under stage12
9) Want to install on a virtual hostname? Use -servername as a command-line parameter with rapidwiz. There are 2 more command-line parameters: -restart to restart any failed install, and -techstack to install only the technology stack (see the sketch after this list).
10) In case of a multi-user installation, start the installer using the root account
11) For additional language, you must use OAM (oracle applications manager)
12) There is a new concept of INST_TOP, which mainly stores instance-specific files, including runtime files, log files and configuration files
13) In R12 there is a concept of services instead of nodes (forms/web/concurrent). Following is the list of services in R12 :
* Root Service Group which supports • Oracle Process Manager (OPMN)
* Web Entry Point Services which supports • HTTP Server
* Web Application Services which supports • OACORE OC4J • Forms OC4J • OAFM OC4J
Batch Processing Services which supports • Applications TNS Listener • Concurrent Managers • Fulfillment Server
Other Service Group which supports • Oracle Forms Services • Oracle MWA Service
* : These services must be installed on the same / one machine (which is nothing but the Web node, according to 11i)
14) Regardless of the type of services configured on a particular server, all files (forms, reports, JSPs) are stored in the unified APPL_TOP, basically to have a pure 3-tier architecture
15) The Installer gives an option to configure OCM (Oracle Configuration Manager), where OCM keeps track of key Oracle and OS stats. This collected data is sent to Oracle Support via HTTPS for better understanding of issues and quick resolutions to any issues reported
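Referring back to point 9, a quick sketch of those rapidwiz invocations (the virtual hostname is a placeholder):
rapidwiz -servername myvirtualhost.example.com   # install using a virtual hostname
rapidwiz -restart                                # resume a failed install
rapidwiz -techstack                              # lay down only the technology stack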
Main points about Upgrade:
1) You can only upgrade to R12 from 11i; if you are at an older version (like 10.7 or 11.0.3 etc.), you must first upgrade to 11i, and then upgrade to R12
2) High level R12 Upgrade process :
• Run rapid installer first time to layout new file structure and tech stack
• Migrate or Upgrade database to 10g R2
• Run Autopatch to run database driver to bring DB to R12 level
• Run rapid installer second time to configure and start services
adautocfg.sh - run AutoConfig (a usage sketch for these scripts follows this list)
adstpall.sh - stop all services
adstrtal.sh - start all services
adapcctl.sh - start/stop/status Apache only
adformsctl.sh - start/stop/status OC4J Forms
adformsrvctl.sh - start/stop/status Forms server in socket mode
adoacorectl.sh - start/stop/status OC4J oacore
adoafmctl.sh - start/stop/status OC4J oafm
adopmnctl.sh - start/stop/status opmn
adalnctl.sh - start/stop RPC listeners (FNDFS/FNDSM)
adcmctl.sh - start/stop Concurrent Manager
gsmstart.sh - start/stop FNDSM
jtffmctl.sh - start/stop Fulfillment Server
adpreclone.pl - Cloning preparation script
adexecsql.pl - Execute sql scripts that update the profiles in an AutoConfig run
java.sh - Call java executable with additional args, (used by opmn, Conc. Mgr)
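A hedged usage sketch of these scripts (the APPS password is a placeholder; in R12 the control scripts live under the instance-specific $INST_TOP):
cd $INST_TOP/admin/scripts
./adstpall.sh apps/<apps_password>        # stop all middle-tier services
./adstrtal.sh apps/<apps_password>        # start all middle-tier services
./adapcctl.sh status                      # check Apache status
./adcmctl.sh status apps/<apps_password>  # check Concurrent Manager status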
Note: To understand this page, you should have prior knowledge or background of APPS 11i