Wednesday, November 19, 2008

RankLog in FAST ESP

The SBC (Search Business Center) provides a simple way of defining how document summaries are rendered, but only the fields returned in the result can be used, unless the rank log is turned on.

To Enable RankLog:

1. Go to Search Profile Settings > Query Handling in SBC.
2. Add the static query parameter ranklog=true and save.
3. Publish the Search Profile by going into Publishing and clicking Publish Search Profile.

Thursday, October 23, 2008

FAST ESP - Enable GEO Search

With Geo Search you can control the sorting based on geographical distance from a given start position/geographical location.

FAST ESP supports geographical coordinates associated with documents, and lets you sort and filter results based on a radius or a rectangular geographical area. With the filter option you can use regular sorting or ranking; sorting based on distance cannot be combined with regular ranking.

To search with Geo sort/filter :

1. GEO search must be enabled in the back-end.
2. GEO data must be fed into FAST ESP.


To Enable Geo in FAST SFE :

1. Open $FASTSEARCH/adminserver/webapps/sfe/WEB-INF/classes/com/fastsearch/espimpl/sfeapi/searchservice/SearchServiceImpl.properties
2. Add com.fastsearch.espimpl.sfeapi.searchservice.search.geo.LatLonGeoSearchImpl to custom_search_inputs=
3. Add com.fastsearch.espimpl.sfeapi.searchservice.result.geo.GeoGraphImpl to custom_result_aspects= (the resulting lines are shown below)
4. Restart ESP using the nctrl restart command.
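
After steps 2 and 3, the two properties should end up looking like this (assuming no other custom inputs or aspects are registered):

custom_search_inputs=com.fastsearch.espimpl.sfeapi.searchservice.search.geo.LatLonGeoSearchImpl
custom_result_aspects=com.fastsearch.espimpl.sfeapi.searchservice.result.geo.GeoGraphImpl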

Now you can see the Geo features in the SFE under the Advanced Search tab.

Sunday, October 12, 2008

ESP : Error 1005

This error occurs when the QPS (queries per second) exceeds the licensed limit.

Error :

Error 1005 Query Term Refuse

Solution :

Check the QPS license limitations via the Admin GUI. If the limit is exceeded, obtain a new license and restart the QR Server.

ESP : Error 28 No space left on device

You will find this error in the following log file:

$FASTSEARCH/var/log/configserver.scrap

This is a FATAL error. The main cause is that there is no space available on that partition to store the configuration file.

Error :

Error saving main configuration
file: IOError: [Errno 28] No space left on device

Solution :

Clear some space to allow the configserver to save configuration.

Note :
Stopping the configserver during these conditions may cause information to be lost.

ESP : Error Code 226

You will find this error in the following log file:

$FASTSEARCH/var/log/configserver.scrap

This is a FATAL error. The main cause is that some other program/application is using the port that ESP uses.

Error :

Failed to start ConfigServer:
error: (226, 'Address already in use')

Solution :

Start the configserver on another port (edit the port element in the config file), or shut down the program using the port you are trying to use.

ESP : FATAL Error 128

You will find this error in the following log file:

$FASTSEARCH/var/log/configserver.scrap

This is a FATAL error. The main cause is that FAST ESP was not able to handle the character encoding while loading the config file.

Error :

Error loading config file: UnicodeError: ASCII encoding
error: ordinal not in range (128)

Solution :

Edit the configuration file and remove the offending non-ASCII characters.

ESP : Indexing

The purpose of storing an index is to optimize speed and performance in finding relevant documents for a search query. Without an index, the search engine would scan every document in the corpus, which would require considerable time and computing power. For example, while an index of 10,000 documents can be queried within milliseconds, a sequential scan of every word in 10,000 large documents could take hours. The additional computer storage required to store the index, as well as the considerable increase in the time required for an update to take place, are traded off for the time saved during information retrieval.

Index Design Factors
Major factors in designing a search engine's architecture include:

Merge factors

How data enters the index, or how words or subject features are added to the index during text corpus traversal, and whether multiple indexers can work asynchronously. The indexer must first check whether it is updating old content or adding new content. Traversal typically correlates to the data collection policy. Search engine index merging is similar in concept to the SQL Merge command and other merge algorithms.

Storage techniques
How to store the index data, that is, whether information should be data compressed or filtered.

Index size
How much computer storage is required to support the index.

Lookup speed
How quickly a word can be found in the inverted index. The speed of finding an entry in a data structure, compared with how quickly it can be updated or removed, is a central focus of computer science.

Maintenance
How the index is maintained over time.

Fault tolerance
How important it is for the service to be reliable. Issues include dealing with index corruption, determining whether bad data can be treated in isolation, dealing with bad hardware, partitioning, and schemes such as hash-based or composite partitioning, as well as replication.

Index Data Structures
Search engine architectures vary in the way indexing is performed and in methods of index storage to meet the various design factors. Types of indices include:

Suffix tree

Figuratively structured like a tree, supports linear time lookup. Built by storing the suffixes of words. The suffix tree is a type of trie. Tries support extendable hashing, which is important for search engine indexing.[8] Used for searching for patterns in DNA sequences and clustering. A major drawback is that the storage of a word in the tree may require more storage than storing the word itself. An alternate representation is a suffix array, which is considered to require less virtual memory and supports data compression such as the BWT algorithm.

Trie
An ordered tree data structure that is used to store an associative array where the keys are strings. Regarded as faster than a hash table but less space-efficient.

Inverted index
Stores a list of occurrences of each atomic search criterion[10], typically in the form of a hash table or binary tree (see the sketch after this list).

Citation index
Stores citations or hyperlinks between documents to support citation analysis, a subject of Bibliometrics.

Ngram index
Stores sequences of length n of data to support other types of retrieval or text mining.

Term document matrix
Used in latent semantic analysis, stores the occurrences of words in documents in a two-dimensional sparse matrix.
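
As a rough illustration of the inverted index mentioned above, here is a minimal Java sketch (the class and method names are my own; real engines also store positions and frequencies, and compress the posting lists):

import java.util.*;

public class InvertedIndex {
    // Maps each term to the set of IDs of documents containing it.
    private final Map<String, Set<Integer>> postings = new HashMap<String, Set<Integer>>();

    public void add(int docId, String text) {
        for (String term : text.toLowerCase().split("\\W+")) {
            if (term.length() == 0) continue;
            Set<Integer> docs = postings.get(term);
            if (docs == null) {
                docs = new TreeSet<Integer>();
                postings.put(term, docs);
            }
            docs.add(docId);
        }
    }

    // Lookup is a single hash probe, independent of corpus size.
    public Set<Integer> lookup(String term) {
        Set<Integer> docs = postings.get(term.toLowerCase());
        return docs == null ? Collections.<Integer>emptySet() : docs;
    }
}

For example, after add(1, "FAST ESP indexes documents") and add(2, "documents and queries"), lookup("documents") returns {1, 2} without scanning either text.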

Friday, October 10, 2008

ESP - Partial Update

We can do partial updates via the Content API (feeder.updateDocument()) while the indexer is doing incremental indexing.

Basically, during the document processing stage, ESP decides whether to do a partial update or a full update (Add Document). For that we need to enable the Partial Update option in our custom processor/pipeline.


1. Open $FASTHOME\etc\processors\ProcessorServer.xml.

2. Add the Partial Update element to our custom processors.

3. Change the XMLMapper.xml to enable the Partial Update.



We can do the same via the Web Analyzer tool as well.

Partial Update considerations :

The update methods provide a means for partial document updates and have certain limitations.
As a general rule, these methods should only be used to update metadata or numeric elements that do not require any document processing. Datetime elements can also be updated.
This means that if you need to update the actual content of an HTML page, PDF document, or XML file, the add methods must be used.
It is possible to implement custom document processing that supports partial updates.

Monday, September 15, 2008

FAST : Integrate the File Traverser

A user-friendly interface to the File Traverser can be integrated into the administrator interface. The connector controller module enables the File Traverser to be integrated into the FAST ESP administrator interface.

To integrate the connector controller, complete the following procedure on each node that you want to make available for file traversing via the administrator interface.

Note :

If it seems from the logs that the File Traverser does not start, check the connectorcontroller scrap file $FASTSEARCH/var/log/connectorcontroller/connectorcontroller.scrap and the filetraverser scrap file in $FASTSEARCH/var/log/FileTraverser_.scrap

1. Add the connector controller entries to the $FASTSEARCH/etc/NodeConf.xml file:
a) Add the connector controller module definition.
b) Add the connector controller to the list of processes.

2. Execute command: $FASTSEARCH/bin/nctrl reloadcfg
3. Execute command: $FASTSEARCH/bin/nctrl start connectorcontroller

The File Traverser should now appear as a Data Source in the administrator interface.

4. Test a normal situation scenario
a) Add collections with the file traverser as a data source.
b) Start and stop the data source.
c) Delete the collection.

FAST : Disable User Authentication

We can disable user authentication in the FAST ESP administrator interface by completing the steps in this procedure.

1. Open $FASTHOME/etc/guiConfig.php
2. Set the following parameter:
$ADMINGUI_PHP_AUTH_DISABLED=True;

Thursday, September 4, 2008

FAST ESP : MARSHAL_MessageSizeExceedLimitOnClient

Question:
---------------

Error: MARSHAL_MessageSizeExceedLimitOnClient. What can be the reason for it?

Answer:
--------------

The error MARSHAL_MessageSizeExceedLimitOnClient usually happens when trying to extract records or attachments beyond a specified limit. Make sure that you have the OMNIORB_CONFIG environment variable set to point to the omniorb.cfg file. In this file you can look for the property giopMaxMsgSize = 209715200 # 200 MBytes.

The default level I believe is 200MB.

Hanging will occur when this is misconfigured.

FAST ESP : Check the DocCount for Collection

Question

Is there a way to determine whether the index for a collection is completely empty and deleted, i.e. after adminclient -d AND deleting the collection in the GUI? How can we know that everything is really gone?

Answer:

On large systems, deleting all documents in a collection may take quite some time. You should verify that all documents in the collection are gone by issuing doccount commands to all columns using the rtsinfo tool.

Usage:

rtsinfo nameserver nameserverport clustername columnid rowid


For a system with three columns, one row and standard port range, run
these three commands on the admin node.

rtsinfo adminhost 16099 webcluster 0 0 doccount collectionname
rtsinfo adminhost 16099 webcluster 1 0 doccount collectionname
rtsinfo adminhost 16099 webcluster 2 0 doccount collectionname
(replace adminhost and collectionname with the entries valid for your system)
Typical output from each of these commands:

There are 1750 docs in the collection collectionname.
SUCCESS.

When "0 docs" is reported from all columns, the collection is clean.

FAST ESP 4.3.x : Delete Indexed Documents

QUESTION:

I have several collections that I would like to re-crawl from scratch, but I don't want to have to reconfigure all the settings for each. In FDS 3.x, is there a way to delete all crawled data without losing the collection configurations?


ANSWER:

Here are the steps required for deleting all crawled data and the index from a 3.2 installation without removing the crawler configuration:

IMPORTANT - This will cause complete loss of all indexed documents,
therefore, search will be unavailable for some time until the crawler has begun re-populating the collections. We strongly recommend initiating this procedure during a system maintenance window.

1. Stop FDS from the Admin GUI or using the command 'net stop FASTDSService'

2. Ensure all FAST processes have had time to stop completely and manually kill any remaining processes with the Task Manager

3. Delete all files and directories within the %FASTSEARCH%\data directory, EXCEPT %FASTSEARCH%\data\crawler\run\domainspec (this file contains the crawler collection configurations)

4. Start FDS with the command 'net start FASTDSService'

5. Once all FDS processes are active in the System Management page, open up the collection configuration for each collection, verify that the settings are still correct and then click 'submit' on each to refresh the collection information.


NOTES:


- You may see temporary OSErrors for the PostProcessor trying to locate the collections directory (which will be in the process of being rebuilt).

- You may also see temporary errors from the QRServer, such as 'All partitions down', because the index is still being rebuilt.

- Some collections may start immediately crawling, while others may be idle for a short time before they start crawling.

FAST ESP : Term Descriptions

Question :

**********

Do you have a quick reference sheet for the terms associated with indexing and related concepts, such as Search Clusters, Search Columns, and Search Rows?

ANSWER
=======

This reference is found in the FAST Data Search 3.2 Configuration Guide.

A Data Search installation may consist of a number of Search Engines. A Search Engine provides indexing and search features towards a given partition of the total searchable content. The Search Engines are grouped in Search Clusters, Search Columns and Search Rows.

A Search Cluster is a group of Search Engines that share the same Index Profile (schema). This means that the collections assigned to this cluster may be mapped to the same index layout. One Search Cluster may for instance contain web pages and documents, while another Search Cluster may contain items from a content database.

The cluster may include multiple Search Rows (query rate scaling) and Search Columns (data volume scaling) that share the same index configuration.

Each Search Cluster will have a number of Collections assigned to it, which provide a logical grouping of content. Note that the collection concept represents a logical grouping of the content within the Search Cluster (one collection resides inside one Search Cluster, but may be spread across multiple Search Columns).

The Document Processing is performed prior to indexing. Within document processing, each document is represented by a set of Elements, which can be further processed and later mapped to searchable Fields via the Index Profile. Elements and Fields may represent content parts and attributes related to the document (body, title, heading, URI, author, category).

The Index Profile defines the layout/schema of the searchable index, and defines how fields are to be treated by query and result processing. Each Search Cluster has an associated Index Profile.

The Index Profile also includes one or more Result Views that define alternative ways for a query front-end to view the index with respect to queries.

FAST ESP : Duplicate items when searching

Question :
**********

I'm getting a lot of identical hits for the same item. What have I
done wrong?

ANSWER:
*******

There are several possible causes for this, but the most common cause
is that the document ID is not present in the document summary.
Unless you've explicitly disabled incremental indexing, the first
entry in the first document summary class MUST be the document ID. If
not, incremental indexing will not work, and you will get lots of
duplicate items.

FAST ESP - Error code 1102: "Could not open channel to server."

Error code 1102: "Could not open channel to server." in the var/log/qrserver.scrap file.

Description:
--------------

1102 is the error code for "Could not open channel to server." It means that the topfdispatch process that the qrserver has been configured to use is not listening on the transport port.

In such error cases the topfdispatch is most likely down, so all queries issued in that time period will receive the 1102 error code; in addition, you may see the transition error codes listed below.

Transition errors may appear when fdispatch goes down and the qrserver loses the connection.

Typical transition error codes are:

1107: "Connection failed while waiting for query result."
1110: "Connection failed while waiting for document summaries."


Solutions:
---------
Restart the topfdispatch processes, which can be done from the Admin GUI --> System Management.

Such a down/up transition can be caused by a slow system (i.e. a ping timeout).

You could try to increase the "pingioctimeout" option by updating the file
$FASTSEARCH/etc/config_data/QRServer/webcluster/etc/qrserver/qrserverrc for
instance with:

pingioctimeout = 30000 # 30 seconds

and restarting the qrserver (nctrl stop/start qrserver) process on all nodes that are running qrserver.

To check whether a server is running "qrserver" or any other process, use the command "nctrl sysstatus".

FAST ESP : 'ConfigServerExceptions.CollectionError'

Question:
==========
I have a collection I am trying to delete through the Admin gui. When clicking on the trashcan it says the collection was fully deleted and gives me a success message. But when I go in and try to create a new collection with the same name I get the following:

FaultCode: 1.

Reason 'ConfigServerExceptions.CollectionError: The Collection
aehcatalog1 already exists (in d:\e\win2ksp3-i686\datasearch-3.1.0.10-
filter-flexlm-000
\common\datasearch\src\configserver\ConfigServerConfig.py:CreateCollec
tion line 794)'

What am I doing wrong?

Solution:
===========
The collection isn't actually deleted when you initially perform the delete action. When you delete, the collection is "scheduled for deletion": all the documents associated with the collection are blacklisted in the search index and will be removed as the deletes are pushed through the system (this happens automatically).

However, if you try to add a collection back with the same name straight away, you will not be able to, because it wasn't fully deleted. You will be able to add it back eventually, but it might take a few hours before the system is ready to accept a collection with the same name again.

A suggestion is to create a collection with a different name. If you want to add the collection back, you'll have to wait for the system to digest your request to delete it. That will at least allow you to work with the collection and pipeline until you have it set up exactly the way you want. Then you can add the collection back under the original name.

FAST ESP : Delete the Indexed Document

Sometimes documents remain in the index even after we have deleted the collection. We can delete these remaining documents from the index without deleting documents from other collections.

Please do the following to delete the indexed documents from the collection :

1. Run %FASTSEARCH%\bin\rtsinfo > allids.txt

2. Run sed.exe "s/ -.*//g" < allids.txt > killthis.txt

3. Run %FASTSEARCH%\bin\rtsadmin rdocs killthis.txt


Note : Supported only on FAST 4.0.x and later.

Friday, August 29, 2008

Estimation Techniques for Software Projects

Software projects are typically controlled by four major variables: time, requirements, resources (people, infrastructure/materials, and money), and risks. Unexpected changes in any of these variables will have an impact on a project. Hence, making good estimates of time and resources required for a project is crucial. Underestimating project needs can cause major problems because there may not be enough time, money, infrastructure/materials, or people to complete the project. Overestimating needs can be very expensive for the organization because a decision may be made to defer the project because it is too expensive, or the project is approved but other projects are "starved" because there is less to go around.

In my experience, making estimates of time and resources required for a project is usually a challenge for most project teams and project managers. It could be because they do not have experience doing estimates, they are unfamiliar with the technology being used or the business domain, requirements are unclear, there are dependencies on work being done by others, and so on. These can result in a situation akin to analysis paralysis as the team delays providing any estimates while they try to get a good handle on the requirements, dependencies, and issues. Alternatively, we will produce estimates that are usually highly optimistic as we have ignored items that need to be dealt with. How does one handle situations such as these?

Useful Estimation Techniques :

Before we begin, we need to understand what types of estimates we can provide. Estimates can be roughly divided into three types:

1. Ballpark or order of magnitude: Here the estimate is probably an order of magnitude from the final figure. Ideally, it would fall within two or three times the actual value.

2. Rough estimates: Here the estimate is closer to the actual value. Ideally it will be about 50% to 100% off the actual value.

3. Fair estimates: This is a very good estimate. Ideally it will be about 25% to 50% off the actual value.

Deciding which of these three different estimates you can provide is crucial. Fair estimates are possible when you are very familiar with what needs to be done and you have done it many times before. This sort of estimate is possible when doing maintenance type work where the fixes are known, or one is adding well-understood functionality that has been done before. Rough estimates are possible when working with well-understood needs and one is familiar with domain and technology issues. In all other cases, the best we can hope for before we begin is order of magnitude estimates. Some may quibble that order of magnitude estimates are close to no estimate at all! However, they are very valuable because they give the organization and project team some idea of what the project is going to need in terms of time, resources, and money. It is better to know that something is going to take between two and six months to do rather than have no idea how much time it will take. In many cases, we may be able to give more detailed estimates for some items rather than others. For example, we may be able to provide a rough estimate of the infrastructure we need but only an order of magnitude estimate of the people and time needed.

Thursday, August 28, 2008

Automatically rename Foreign Keys on a DB

Introduction

This article explains how to automatically rename every relation in your database. It could be useful if your database was upgraded from a different DBMS and the relation names are meaningless (like the Access upgrade does), if those names were created over the years by different developers using different standards, or if you renamed one or more tables in your database and you need to fix the foreign keys' names as well.

Background :

The idea (and the underlying algorithm) is simple:

Take all the relations in the database, look at the tables involved in each relation, and give each one the name "FK_ParentTable_ForeignTable[Counter]".
With previous versions of SQL Server this was easier, because the user could directly update (with a single statement) the system catalogues, but in SQL Server 2005 this feature was disabled for consistency reasons.

In SQL Server 2005 there are a lot of useful views lying over the system catalogues that let the user know about everything in every database. The code uses those views to accomplish the task.

Using the code:

The code is just a T-SQL block, so you can paste it into a "Management Studio" window and run it from there, put it in a stored procedure body to call when needed, or run it from within a "database update" script; do whatever you would do to run a SQL batch.

Points of Interest

This code makes use of some new SQL Server 2005 features.

To make the code simpler, it was divided logically using common table expressions (CTEs). Moreover, to count the foreign keys properly, a ranking function is used.
So if you are new to these, you can learn something :)

In depth look

The logic is simple: obtain a list of the actual foreign keys in the database and rename them using the sp_rename extended procedure. So the code is basically a query wrapped in procedure code that loops over the result set and does the rename work. There's nothing important/special/difficult to point out in the procedure; the interesting part is the query, which is explained in detail below.

First of all, we need to obtain every foreign key present in our database.
The view sys.foreign_key_columns has the information on "what column is linked to what other column". We use this view to get the list of every distinct relation (a relation could involve more than one column). The first CTE holds this information.

Next we translate object IDs into object names. This can be done by joining the first CTE with the sys.objects view. Additionally, we can count how many times a parent is related to a referenced table. This CTE stores the actual relation name, the parent table, the referenced table, and the counter.

The third step translates the information obtained in the second step into something more useful: the old relation name and the new relation name.
The CASE is used to include or omit the counter depending on whether there is more than one relation (you can easily modify it if you want a different renaming scheme).

The fourth step takes into consideration (for the rename process) only the relation names that don't already exist (because maybe someone has already fixed some of them manually, or they were created with the right name).


Content From : CodeProject

Introduction - FAST Taxonomy

The FAST Taxonomy Explorer provides categorization based on advanced linguistic technologies that let you control the flow of information into your organization and order, access, and retrieve that data, as well as information created within your organization.

The Categorizer classifies documents and organizes information into a hierarchical or a flat set of categories. Categorization is the process of concisely defining the information within a particular document; in other words, the major topic or subject of the document. In the context of text-based document searches, categorization is an automated process that classifies numerous text documents, placing them into a taxonomy. A taxonomy is an organized classification structure that facilitates information retrieval. The categorization process inserts category tags into the documents prior to indexing.

When the documents in an index have been categorized, end users can restrict a query to a specific category in that index. Categorizing documents increases the likelihood that your end users will obtain the meaningful results they seek for two reasons:

1. Metadata: Documents are organized and stored by category, according to
their metadata tags.

2. Filter: Queries can be filtered using the categories that you created as
part of your taxonomy.

You can also choose to categorize the end-user documents from among several languages. FAST provides the Taxonomy Explorer to create and test categories.

DOJO - Ajax Technology

Dojo is an Open Source DHTML toolkit written in JavaScript. It builds on several contributed code bases (nWidgets, Burstlib, f(m)), which is why we refer to it sometimes as a "unified" toolkit. Dojo aims to solve some long-standing historical problems with DHTML which prevented mass adoption of dynamic web application development.

Dojo allows you to easily build dynamic capabilities into web pages and any other environment that supports JavaScript sanely. You can use the components that Dojo provides to make your web sites more usable, responsive, and functional. With Dojo you can build degradable user interfaces more easily, prototype interactive widgets quickly, and animate transitions. You can use the lower-level APIs and compatibility layers from Dojo to write portable JavaScript and simplify complex scripts. Dojo's event system, I/O APIs, and generic language enhancement form the basis of a powerful programming environment. You can use the Dojo build tools to write command-line unit-tests for your JavaScript code. The Dojo build process helps you optimize your JavaScript for deployment by grouping sets of files together and reusing those groups through "profiles".

Dojo does all of these things by layering capabilities onto a very small core which provides the package system and little else. When you write scripts with Dojo, you can include as little or as much of the available APIs as you need to suit your needs. Dojo provides multiple points of entry, interpreter independence, forward looking APIs, and focuses on reducing barriers to adoption.

Download Dojo Tool Kit 1.1.1

Integrate With Code :

1. First, add the Dojo script tag to the page:

<script type="text/javascript" src="js/dojo1.0/dojo/dojo.js"
        djConfig="parseOnLoad:true, isDebug:true"></script>


Dojo has a mechanism for setting various configuration options at runtime. The two most common are parseOnLoad, which toggles page-load parsing of widgets and in-markup code, and isDebug, which enables or disables certain debugging messages.

We can set these configuration options in another way as well:

script type="text/javascript"
var djConfig = {
isDebug:true, parseOnLoad:true
};
/script
script type="text/javascript" src="js/dojo1.0/dojo/dojo.js"

Open a Command Prompt Window From Within Windows Explorer

Follow these steps to enable this option in the right-click drop-down menu in Windows Explorer:

1. Open "Windows Explorer"
2. Tools menu / Folder Options
3. Select File Types tab
4. Find and highlight "Folder" in File Types
5. Click Advanced
6. Click New to add new action
7. Type action name: Command Prompt
8. Type in "Application used to perform the action": cmd /k cd

(It may be necessary to type in the full path to cmd such as C:\WINNT\system32\cmd.exe)

Wednesday, August 27, 2008

Oracle - ORA-12535 error

Issue :
I have two databases. One local (in domain ad.xyz.com) and a remote database (domain us.oracle.com). I am logging in to these from a client machine and I do not have DBA access.

I am able to TNSPING and connect using SQL*Plus to both of these databases. I have created a private DB link in my local DB pointing to the remote DB. However, when I try to refer to any object in the remote DB using this DB link, I get the "ORA-12535: TNS:operation timed out" error. Since I am able to TNSPING and connect to the DBs, the .ora files are correct.

I have referred to all of the articles I could find on the Internet but these did not help in solving the issues. Can you please let me know what I may have missed out?

Solution : (Given By Brian Peasland)

The only thing TNSPING tells you is that the database listener is up and is configured for the SID defined in your tns string. It does not indicate whether or not you can actually connect to the Oracle instance. The most common reason why you are receiving the ORA-12535 error is due to a firewall configuration issue. While the Listener is listening on port 1521, the connection will use a different port. The firewall could be blocking this other port. You may need to work with your network administrators to resolve this issue.

Weblogic Portlet 3.2.1 With FAST ESP

FAST ESP ships with predefined search portlets for BEA WebLogic 8.1, Oracle, and IBM WebSphere 5.0. The Search Portlet for BEA WebLogic 8.1 allows a WebLogic web application to query and process search results from FAST Data Search. You can use the portlet as provided or modify the user interface to comply with your company preferences and policies.


Supported Features


1. Simple installation for use in Weblogic Workshop and Weblogic Portal
Administration.

2. Configurable preferences to customize the portlet.

3. Access to many search features in FAST Data Search.

4. Java Server Pages (JSPs) that can be customized for the application.

Supported Versions :

BEA Weblogic Platform 8.1
FAST Data Search 3.2

Portlet Samples :

No need to write a single line of code. If you want, you can customize the code based on your requirements.

Tuesday, August 26, 2008

DbXplorer - Oracle WebLogic 10.3

We can connect to a database schema using the DbXplorer.
In this article we will learn how to explore databases using the DbXplorer™, a view that provides an intuitive interface for database access through the ORM Workbench. Using the DbXplorer, you can set up a database connection, add and edit data, review the database artifacts, query the data in an existing table or column, and generate object-relational mappings.

Create a New Database Connection :

1. Click on the DbXplorer view tab, if it is visible. If not, open the DbXplorer
view by clicking Window > Show View > DbXplorer.
2. Right-click anywhere within the DbXplorer view and select New Connection.



3. In the Add Database Connection wizard, enter a database connection name. The database connection name can be arbitrary and does not have to match the actual name of the database server. Click Next to proceed.


4. In the Add Database Connection dialog, click Add and select the Hypersonic JDBC driver file, \workshop-jpa-tutorial\web\WEB-INF\lib\hsqldb.jar.


5. Click Next

6. In the JDBC Driver Class field click Browse and select org.hsqldb.jdbcDriver.

7. Workshop provides sample Database URL's for some standard databases, which can be accessed from the Populate from database defaults pull down menu. Select HypersonicSQL In-Memory.


8. For database URL jdbc:hsqldb:{db filename}, specify the Hypersonic database script file location for {db filename}: \workshop-jpa-tutorial\web\hsqlDB\SalesDB .

9. For User, enter sa.



10. Click the Test Connection button to verify the connection information.


11. Click Finish. The new database connection displays in the DbXplorer view.
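
Outside the IDE, the same connection can be made from plain JDBC, as in this minimal sketch (the relative script path and the empty password for sa are assumptions based on the steps above):

import java.sql.Connection;
import java.sql.DriverManager;

public class SalesDbConnect {
    public static void main(String[] args) throws Exception {
        // Same driver class and URL format as in steps 6 and 8.
        Class.forName("org.hsqldb.jdbcDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:hsqldb:web/hsqlDB/SalesDB", "sa", "");
        System.out.println("Connected: " + !con.isClosed());
        con.close();
    }
}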


After that, we can navigate through all the tables available in the database.

DbXaminer helps you to see the relationships between the tables.

Oracle Workshop for WebLogic 10g R3

Oracle introduced WebLogic 10g R3 (10.3) with a lot of additional features, and Workshop version 10.3 is supported on Windows Vista as well.

Added Features :

1. Support for Java Enterprise Edition 5
Workshop support for Java EE5 includes support for the following technologies:

Servlet 2.5
JSP 2.1
JSF 1.2
JSTL 1.2
EJB 3.0
JAX-WS
JAXB 2.0

2. Built on Eclipse 3.3.2 and Web Tools Platform 2.0.3
3. Supported by Windows Vista
4. XMLBeans 2.3
5. WorkSpace Studio launcher has been discontinued.
6. Provides a tool to upgrade from 8.1 and 9.2 to 10.3
7. DbXplorer and DbXaminer for working with databases, etc.

If you want more information, check here:

Oracle Workshop for WebLogic™ 10.3
Download WebLogic Server 10.3
Download Free Oracle WebLogic Workshop 10.3
Upgrading Tutorial from 8.1 to 10.3
Oracle 10.3 Tutorial

Oracle now gives this software away free of cost.

Convert a Java Application to .NET

Yes, we can convert class files and jar files into .NET executable formats (.dll and .exe) using the ikvmc tool.

IKVM.NET includes ikvmc, a utility that converts Java .jar files to .NET .dll libraries and .exe applications. In this section, we'll convert a Java application to a .NET .exe.

Navigate to IKVMROOT\samples\hello and enter the following:

ikvmc hello.jar

After the command completes, you should find a hello.exe file in the current directory. To execute it:

Windows / .NET Framework:
Try running hello.exe. If you get a FileNotFound exception when the .NET runtime attempts to load the referenced IKVM.OpenJDK.ClassLibrary.dll, remember that the .NET Framework expects to find referenced dll's in the application directory or in the Global Assembly Cache. Either install the dll's in the Global Assembly Cache, or copy them to the application directory.

Linux / Mono:

Run it using the following command:

mono hello.exe


For more details, see the following links :

IKVMC.NET Tutorial
Download IKVMC.NET

Saturday, August 23, 2008

Regular Expression

Basically, a regular expression is a pattern describing a certain amount of text. Their name comes from the mathematical theory on which they are based, but we will not dig into that. Since most people, including myself, are too lazy to type the full term, you will usually find the name abbreviated to regex or regexp. I prefer regex, because it is easy to pronounce the plural "regexes".

\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b

This is a more complex pattern. It describes a series of letters, digits, dots, underscores, percentage signs and hyphens, followed by an at sign, followed by another series of letters, digits and hyphens, finally followed by a single dot and between two and four letters. In other words: this pattern describes an email address.

With the above regular expression pattern, you can search through a text file to find email addresses, or verify if a given string looks like an email address.
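
A quick Java sketch of that verification (compiling the pattern case-insensitively, since the character classes above use upper-case letters only):

import java.util.regex.Pattern;

public class EmailCheck {
    public static void main(String[] args) {
        Pattern emailPattern = Pattern.compile(
                "\\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,4}\\b",
                Pattern.CASE_INSENSITIVE);
        // find() locates the pattern anywhere inside the input string.
        System.out.println(emailPattern.matcher("john.doe@example.com").find()); // true
        System.out.println(emailPattern.matcher("no address here").find());      // false
    }
}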

Grabbing HTML Tags:

<TAG\b[^>]*>(.*?)</TAG> matches the opening and closing pair of a specific HTML tag. Anything between the tags is captured into the first backreference. The question mark in the regex makes the star lazy, to make sure it stops before the first closing tag rather than before the last, like a greedy star would do. This regex will not properly match tags nested inside themselves, like in <TAG>one<TAG>two</TAG>one</TAG>.

<([A-Z][A-Z0-9]*)\b[^>]*>(.*?)</\1> will match the opening and closing pair of any HTML tag. Be sure to turn off case sensitivity. The key in this solution is the use of the backreference \1 in the closing tag. Anything between the tags is captured into the second backreference. This solution will also not match tags nested in themselves.

IP Addresses :

Matching an IP address is another good example of a trade-off between regex complexity and exactness. \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b will match any IP address just fine, but will also match 999.999.999.999 as if it were a valid IP address. Whether this is a problem depends on the files or data you intend to apply the regex to. To restrict all 4 numbers in the IP address to 0..255, you can use this complex beast:
\b(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b (everything on a single line). The long regex stores each of the 4 numbers of the IP address into a capturing group. You can use these groups to further process the IP number.

If you don't need access to the individual numbers, you can shorten the regex with a quantifier to: \b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b. Similarly, you can shorten the quick regex to \b(?:\d{1,3}\.){3}\d{1,3}\b

Wednesday, July 30, 2008

FAST : nCtrl

nctrl is an executable used to control a FAST ESP server node. It resides in $FASTSEARCH/bin. nctrl gets its data from NodeConf.xml, which resides in the $FASTSEARCH/etc directory.

Commands :

nctrl [options] [commands]

status -> Displays details of the processes/modules available on this node.
Ex : nctrl status

start -> Starts processes/modules.
Ex : nctrl start [process1] [process2]...

stop -> Stops processes/modules.
Ex : nctrl stop [process1] [process2]...

kill -> Forcibly kills processes/modules.
Ex : nctrl kill [process1] [process2]...

suspend -> Suspends processes/modules.
Ex : nctrl suspend [process1] [process2]...

resume -> Resumes processes/modules that were previously suspended.
Ex : nctrl resume [process1] [process2]...

create -> Creates a new process/module.
Ex : nctrl create [process1]

reloadcfg -> Reloads NodeConf.xml.
Ex : nctrl reloadcfg

FAST : The System Error 1069 has occurred

This error occurs because of user privileges.
During the installation of the FAST server, the installer automatically creates two Windows services (FAST Search, FAST Web Server) that are run by the OS startup scripts.

The user must have full admin privileges to start these services. If the user does not, the system throws an error message like "System Error 1069 has occurred".

Solution :

Users are added to Local Users & Groups when they log in to the OS.

-> Control Panel -> Administrative Tools -> Computer Management
-> Local Users & Groups -> Users.

Just click the corresponding user to view/edit their credentials.

Right Click -> Properties


Here we can see five options :
1. User Must Change Password in Next Login
2. User Can't Change Password
3. Password Never Expire
4. Account is Disabled
5. Account Locked Out

To start these services, we should change the following credentials:
-> Password Never Expires : Checked
-> Account Locked Out : Unchecked

Tuesday, July 22, 2008

Search Using FAST Search API

The FAST Enterprise Search API provides the interface to communicate with ESP (Enterprise Search Platform). We can perform a search in the following way.


1. We need the ESP admin server IP address/domain name and port number.

EX : String host = "172.19.61.104";
String port = "15100";

2. Then identify the collection where the data resides.

EX : String viewName = "espsystemwebcluster";

Note : The collection should be created in the ESP admin interface before running the program.

3. Create a Properties object to build a search factory instance.
EX :
String hostport = host + ":" + port;
Properties p = new Properties();
p.setProperty("com.fastsearch.esp.search.SearchFactory",
"com.fastsearch.esp.search.http.HttpSearchFactory");
p.setProperty("com.fastsearch.esp.search.http.qrservers", hostport);

ISearchFactory searchFactory = SearchFactory.newInstance(p);

Note : qrservers is a list of the QRServers that you want to connect to. It is a mandatory property. You can list multiple servers in the same way, like "qrserver1.site.com:15100, qrserver2.site.com:15100".

HttpSearchFactory has some other optional properties:
RequestMethod -> "GET"/"POST" (default: "GET")
CertificateFile -> used when an SSL certificate is in place
KeepAlive -> used when persistent connections are needed

4. Once the search factory instance has been created, we can get the view of the collection. Using this view object, we can do the search.

EX : ISearchView view = searchFactory.getSearchView(viewName);

5. The FAST ESP API provides multiple types of search options. You can set the option in the FAST search query. By default, ESP has its own query language (FQL).

EX : query = query.replaceAll("\"", "\\\\\"");  // escape embedded quotes
String fql = "string(\"" + query + "\", mode=simpleany)";

Mode - Specifies the search option.

6. Using IQuery interface we can create the FQL Query.

EX : IQuery theQuery = new Query(fql);

7. Then pass this query to the view object; it will return the result set.

Ex : IQueryResult result = view.search(theQuery);

This result interface holds all the documents related to your query.
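
Putting steps 1 through 7 together, a minimal end-to-end sketch (using only the calls shown above; the Search API imports from the ESP client jar are omitted for brevity):

String host = "172.19.61.104";
String port = "15100";
String hostport = host + ":" + port;

Properties p = new Properties();
p.setProperty("com.fastsearch.esp.search.SearchFactory",
        "com.fastsearch.esp.search.http.HttpSearchFactory");
p.setProperty("com.fastsearch.esp.search.http.qrservers", hostport);

ISearchFactory searchFactory = SearchFactory.newInstance(p);
ISearchView view = searchFactory.getSearchView("espsystemwebcluster");

// Build an FQL query and run it against the view.
IQuery theQuery = new Query("string(\"enterprise search\", mode=simpleany)");
IQueryResult result = view.search(theQuery);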

FAST Introduction

FAST ESP is an integrated software application that provides a platform for searching and filtering services. It is a distributed system that enables information retrieval from any type of information. ESP combines real-time searching, advanced linguistics, and a variety of content access options into a modular, scalable product suite.

FAST ESP does the following:

1. Retrieves or accepts content from web sites, file servers, application-specific content systems, and direct import via API
2. Transforms all content into an internal document representation
3. Analyzes and processes these documents to allow for enhanced relevancy
4. Indexes the documents and makes them searchable
5. Processes search queries against these documents
6. Applies algorithms or business rule-based ranking to the results
7. Presents the results along with the navigation options.

Monday, July 21, 2008

FAST - Search Engine

FAST is the leading global provider of enterprise search technologies and solutions that are behind the scenes at the world's best known companies. FAST's flexible and scalable enterprise search platform (FAST ESP) elevates the search capabilities of enterprise customers and connects people to the relevant information they seek regardless of medium. This drives revenues and reduces total cost of ownership by effectively leveraging IT infrastructure. FAST's solutions are used by more than 2,600 global customers and partners, including America Online (AOL), Cardinal Health, CareerBuilder.com, CIGNA, CNET, Dell, Factiva, Fidelity Investments, Findexa, IBM, Knight Ridder, LexisNexis, Overture, Rakuten, Reed Elsevier, Reuters, Sensis, Stellent, Tenet Healthcare, Thomas Industrial Networks, Thomson Scientific, T-Online, US Army, Virgilio (Telecom Italia), Vodafone, and Wanadoo.

FAST is headquartered in Norway and is publicly traded under the ticker symbol 'FAST' on the Oslo Stock Exchange. The FAST Group operates globally with presence in Europe, the United States, Asia Pacific, Australia, South America, and the Middle East.

In January, Microsoft made an accepted offer to acquire FAST for $1.2 billion.

FAST SEARCH - Home

Sealed Class

sealed is a modifier in .NET, equivalent to marking a class with the final keyword in Java. We can use it for the following reasons.

1. When you want to protect a class from being inherited, you declare it sealed. A sealed class cannot be inherited by any other class.
2. The sealed modifier is used to prevent derivation from a class. An error occurs if a sealed class is specified as the base class of another class. A sealed class cannot also be an abstract class.


using System;
sealed class SealedClass{
public int x;
public int y;
}
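
// Note: a declaration like "class Derived : SealedClass { }" would fail
// to compile with error CS0509: cannot derive from sealed type 'SealedClass'.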

class MainClass{
static void Main(){
SealedClass sc = new SealedClass();
sc.x = 110;
sc.y = 150;
Console.WriteLine("x = {0}, y = {1}", sc.x, sc.y);
}
}


Output :
x = 110, y = 150

Sunday, July 20, 2008

Abstract Class

1. Abstract classes are classes that contain one or more abstract
methods.
2. An abstract method is a method that is declared, but contains
no implementation.
3. Abstract classes may not be instantiated, and require subclasses to
provide implementations for the abstract methods.


Let's look at an example of an abstract class, and an abstract method.

Suppose we were modeling the behavior of animals, by creating a class hierarchy that started with a base class called Animal. Animals are capable of doing different things like flying, digging and walking, but there are some common operations as well, like eating and sleeping. Some common operations are performed by all animals, but in a different way. When an operation is performed in a different way, it is a good candidate for an abstract method (forcing subclasses to provide a custom implementation). Let's look at a very primitive Animal base class, which defines an abstract method for making a sound (such as a dog barking, a cow mooing, or a pig oinking).

public abstract class Animal
{
public void eat(Food food)
{
// do something with food....
}

public void sleep(int hours)
{
try
{
// 1000 milliseconds * 60 seconds * 60 minutes * hours
Thread.sleep ( 1000 * 60 * 60 * hours);
}
catch (InterruptedException ie) { /* ignore */ }
}

public abstract void makeNoise();
}


Note that the abstract keyword is used to denote both an abstract method, and an abstract class. Now, any animal that wants to be instantiated (like a dog or cow) must implement the makeNoise method - otherwise it is impossible to create an instance of that class. Let's look at a Dog and Cow subclass that extends the Animal class.

public class Dog extends Animal
{
public void makeNoise() { System.out.println ("Bark! Bark!"); }
}

public class Cow extends Animal
{
public void makeNoise() { System.out.println ("Moo! Moo!"); }
}
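
A quick usage sketch: a variable of the abstract type can hold any concrete subclass, and the call dispatches to that subclass's implementation.

Animal animal = new Dog();
animal.makeNoise();   // prints "Bark! Bark!"
// new Animal();      // compile error: Animal is abstract and cannot be instantiated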

Now you may be wondering why not declare an abstract class as an interface, and have the Dog and Cow implement the interface. Sure you could - but you'd also need to implement the eat and sleep methods. By using abstract classes, you can inherit the implementation of other (non-abstract) methods. You can't do that with interfaces - an interface cannot provide any method implementations.

Wednesday, July 9, 2008

OutOfMemory Error

Sometimes we get an out-of-memory error when building a large application, because of the low heap memory of the WebLogic build process. To fix this issue, do the following.

1. Go to the WL_HOME/workshop directory.
2. Edit the Wlwbuild.cmd file.
3. Change the -Xmx1024m -Xms1024m -Xss512k properties.

-Xms - minimum heap size
-Xmx - maximum heap size
-Xss - thread stack size
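
For example, after the edit, the java invocation inside the script would carry flags like the following (the rest of the command line stays unchanged):

java -Xms1024m -Xmx1024m -Xss512k <rest of the Wlwbuild command>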

Then restart the build process; it will work fine.

Tuesday, July 8, 2008

Localization & Internationalization

Internationalization is the process of designing an application so that it can be adapted to various languages and regions without engineering changes. Sometimes the term internationalization is abbreviated as i18n, because there are 18 letters between the first "i" and the last "n."

An internationalized program has the following characteristics:

1. With the addition of localized data, the same executable can run worldwide.
2. Textual elements, such as status messages and the GUI component labels, are not hardcoded in the program. Instead they are stored outside the source code and retrieved dynamically.
3. Support for new languages does not require recompilation.
4. Culturally-dependent data, such as dates and currencies, appear in formats that conform to the end user's region and language.
5. It can be localized quickly.

Sample Code :

1. First, get the locale data, such as language and country.

String language="en";
String country="US";
Locale currentLocale = new Locale(language, country);

2. Before that, we should create the message bundles (each is nothing but a properties file that includes the content of the page):

MessagesBundle.properties
MessagesBundle_de_DE.properties
MessagesBundle_en_US.properties
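
Each bundle is a plain key=value properties file; for instance, MessagesBundle_en_US.properties could contain (the greetings key is the one read in step 3 below; the second key is just an illustration):

greetings = Hello
farewell = Goodbye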

3. Once they are created, you can access the message bundle based on the locale using ResourceBundle.

ResourceBundle messages = ResourceBundle.getBundle("MessagesBundle", currentLocale);


System.out.println(messages.getString("greetings"));

JSP Content Type - MIME Type

Some JSP pages are designed so they can deliver content using different content types (and character sets) depending on request time input. These pages may be organized as custom actions or scriptlets that determine the response content type and provide glue into other code actually generating the content of the response.

The initial content type for the response (including the character set) is determined as shown in the "Output Content Type" column in Table JSP.3-1. In all cases, the container must call response.setContentType() with the initial content type before processing the page.

The content type (and character set) can then be changed dynamically by calling setContentType() or setLocale() on the response object. The most recent call takes precedence. Changing the content type can be done up until the point where the response is committed. Data is sent to the response stream on buffer flushes for buffered pages, or on encountering the first content (beware of whitespace) on unbuffered pages. Whitespace is notoriously tricky for JSP Pages in JSP syntax, but much more manageable for JSP Documents in XML syntax.

Default JSP Type :

The page directive attributes and their defaults are:

language CDATA "java"
extends %ClassName; #IMPLIED
contentType %Content; "text/xml; UTF-8"
import CDATA #IMPLIED
session %Bool; true
buffer CDATA 8kb
autoFlush %Bool; true
isThreadSafe %Bool; true
info CDATA #IMPLIED
errorPage %URL; #IMPLIED
isErrorPage %Bool; false


1. is "text/html" for JSP Pages in standard syntax, or "text/xml" for JSP Documents in XML syntax.

2. is "ISO-8859-1" for JSP Pages in standard syntax, or "UTF-8" for JSP Documents in XML syntax.

3. is "ISO-8859-1" for JSP Pages in standard syntax, or "UTF-8" or "UTF-16" for JSP Documents in XML syntax (depending on the type detected as per the rules in the XML specification). Note that, in the case of include directives, the default input encoding is derived from the initial page, not from any of the included pages.


How to Set the Content Type in JSP :

Using the setContentType() method of the response, we can change the MIME type and encoding format.

EX :
response.setContentType("text/html");
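
Alternatively, the initial content type is usually fixed at the top of the page with the page directive:

<%@ page contentType="text/html; charset=UTF-8" %>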


Dynamically Generate PDF

We can generate PDFs dynamically. Here I will explain PDF generation with the iText library.

History Of iText:

iText is a library that allows you to generate PDF files on the fly.

iText is an ideal library for developers looking to enhance web- and other applications with dynamic PDF document generation and/or manipulation. iText is not an end-user tool. Typically you won't use it on your Desktop as you would use Acrobat or any other PDF application. Rather, you'll build iText into your own applications so that you can automate the PDF creation and manipulation process.

For instance in one or more of the following situations:
Due to time or size, the PDF documents can't be produced manually.
The content of the document must be calculated or based on user input.
The content needs to be customized or personalized.
The PDF content needs to be served in a web environment.
Documents are to be created in "batch process" mode.
You can use iText to:
Serve PDF to a browser
Generate dynamic documents from XML files or databases
Use PDF's many interactive features
Add bookmarks, page numbers, watermarks, etc.
Split, concatenate, and manipulate PDF pages
Automate filling out of PDF forms
Add digital signatures to a PDF file

Process :

1. Decide where to store the PDF. In a web application, it should be stored on the server.

String obsPath = this.getRequest().getContextPath();
String RealPath = this.getRequest().getRealPath(obsPath);
File ptfFile=null;
String path = RealPath.substring(0,RealPath.lastIndexOf(File.separator));
ptfFile = new File(path + File.separator + "Generated PDF");
if(!ptfFile.exists()){
ptfFile.mkdir();
}
ptfFile = new File(path + File.separator +"Generated PDF"+
File.separator+"test"+".pdf");


The above code checks whether the PDF already exists before creating it. If it exists, the existing PDF file will be updated.

2. Create the Document and PDFWriter Object.

Document document = null;
PdfWriter writer=null;
document = new Document(PageSize.A4);
writer=PdfWriter.getInstance(document, new FileOutputStream(ptfFile));

3. Open the document and write the content of the PDF.

document.open();
document.setMargins(document.leftMargin()-40, document.rightMargin()-40,
    document.topMargin()+30, document.bottomMargin()-18);
document.newPage();
form.setFilePath("Generated PDF"+ File.separator+"test"+".pdf");
Phrase phrase = new Phrase(new Chunk(form.getFilePath(),font8));
document.add(phrase);
document.close();


You can set the margins of the PDF using the setMargins() method of the document object.

Now the PDF has been generated and stored in the specified path. You can open the PDF when the user needs it.

Monday, July 7, 2008

Backing File

Backing files allow you to programmatically add functionality to a portlet by implementing a Java class, which enables preprocessing prior to rendering the portal controls. They can be attached to portals either by using WebLogic Workshop or by coding them directly into a .portlet file.

These are simple Java classes that implement the com.bea.netuix.servlets.controls.content.backing.JspBacking interface or extend the com.bea.netuix.servlets.controls.content.backing.AbstractJspBacking abstract class.

Backing files are supported by the following controls:

Desktops
Books
Pages
Portlets

All backing files are executed before and after the JSP is called. Each backing file can implement the following methods:

init()
handlePostBackData()
raiseChangeEvents()
preRender()
dispose()

Example :

package backing;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import com.bea.netuix.events.Event;
import com.bea.netuix.events.CustomEvent;
import com.bea.netuix.servlets.controls.content.backing.AbstractJspBacking;
public class cashierEntry extends AbstractJspBacking{
public void getcashierEntryDetails ( HttpServletRequest request, HttpServletResponse response, Event event){
CustomEvent customEvent = (CustomEvent) event;
String message = (String) customEvent.getPayload();
HttpSession mySession = request.getSession();
mySession.setAttribute("customerName", message);
}
}

Once you create the backing file, attach it to the portlet/page/book via the backing file property.

Otherwise you can directly add the backing file in the .portlet file. A rough sketch, assuming the usual netuix:jspContent element with a backingFile attribute (check your .portlet for the exact element):

<netuix:jspContent backingFile="backing.cashierEntry"
    contentUri="/com/SBIlife/portlets/ss/homeEntry.jsp"/>


Using backing files, you can effectively perform validation for a portlet, book, or page.

Navigating with Servlets

The first thing to understand is how a servlet begins and ends. Similar to an applet, the life cycle of the servlet begins with an automatic call to its init method. The init method is called once by the server for initialization and not again for each user request. Because the server creates a single class instance that handles every request of the servlet, performance is greatly improved, eliminating the need to create overhead that would otherwise be necessary if each request required the server to create a new object.

Next, the service method is called, performing the work of the servlet and passing ServletRequest and ServletResponse objects as needed. These objects collect or pass information such as the values of named attributes, the IP address of the agent, or the port number on which the request was received.

Lastly, as with an applet, when it is time to remove a previously loaded servlet instance, the servlet's destroy method is called. This gives the servlet a chance to close database connections, save information to a log file, or perform other cleanup tasks before it is shut down. If you have special cleanup tasks you'd like your servlet to perform before being removed from memory, the destroy method is the place to write those instructions.

All three methods, init, service, and destroy, can be overridden.
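For instance, a servlet that opens a database connection in init and releases it in destroy might look like the following sketch; the database URL and credentials are illustrative assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class CleanupServlet extends HttpServlet {
    private Connection connection;

    // Called once, when the server first loads the servlet.
    public void init() throws ServletException {
        try {
            // Illustrative database URL and credentials.
            connection = DriverManager.getConnection(
                    "jdbc:pointbase:server://localhost:9093/workshop",
                    "weblogic", "weblogic");
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }

    // Called once, just before the servlet is removed from memory.
    public void destroy() {
        try {
            if (connection != null) {
                connection.close();  // release database resources
            }
        } catch (SQLException e) {
            log("Failed to close connection", e);
        }
    }
}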

If you need a servlet to load with customized initialization behavior, you can override the init method using either of the following two formats:

No argument format:

public void init() throws ServletException {
    // Initialization code...
}

Takes ServletConfig object:

public void init(ServletConfig config)
        throws ServletException {
    super.init(config);
    // Initialization code...
}


The latter is used when the servlet needs to be initialized with server information such as:

1. Password files
2. A hit count number
3. Serialized cookie information
4. Data from previous requests

When using the ServletConfig format, call super.init(config) so that the superclass registers the information where the servlet can find it later.

Which format you use depends on what information needs to be known at initialization time. If no information is needed when the servlet is first invoked, then the no argument format may be used.

The Heart of the Servlet

The javax.servlet package and the javax.servlet.http package provide the classes and interfaces to define servlets. HTML servlet classes extend the javax.servlet.http.HttpServlet abstract class, which provides a framework for handling the HTTP protocol. Because HttpServlet is abstract, your servlet must extend it and override at least one of the following methods:

1. doGet gets information such as a document, the results of a database query, or strings passed from the browser.
2. doPost posts information such as data to be stored in a database, user login and password information, or other strings passed from the browser.
3. doPut places documents directly on the server.
4. doDelete deletes information or documents from the server.
5. getServletInfo returns descriptive information about the servlet, possibly its purpose, author, or version number.

These methods are the heart of the servlet, where instructions and the purpose of the servlet are carried out. You will likely only need to use, or override, a few of the methods. RedirectServlet overrides doGet and doPost, but does not need any of the other methods.

public class Example extends HttpServlet {
    public void doGet(HttpServletRequest req,
            HttpServletResponse res)
            throws ServletException, IOException {
        // ...
    }
}

When a client calls a servlet by typing the URL in the browser, submitting a form, or clicking a button on a menu, the servlet's service method checks the HTTP request type, such as POST or GET. This in turn calls doGet, doPost, doPut, or doDelete as needed.

You can override the service method itself without implementing doGet and doPost, but it is generally better to implement both doGet and doPost. See the JDC RedirectServlet.

The doPost and doGet Methods

The doPost or doGet methods instruct the server about what it must do, whether printing information back to the client's browser, writing to a database, or simply redirecting the client to a requested URL. Within these methods you will use Java programming syntax to give specific instructions for the servlet to interact between the client and the server.

Servlets to Process Forms

A form is a powerful web site tool, enabling clients to take polls, enter personal information for online shopping, or subscribe to an e-newsletter. In other words, forms turn static web pages into dynamic pages. But a form cannot give instructions to a server. Without an application between the form and the server, the form is useless.

Servlets process form information in three steps:

1. Retrieve or request the data
2. Store or pass the data
3. Respond to the request
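Here is a minimal sketch of those three steps in a single doPost method; the form field name, session attribute, and response text are illustrative, not part of the original article:

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SubscribeServlet extends HttpServlet {
    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // 1. Retrieve the data submitted by the form.
        String email = request.getParameter("email");

        // 2. Store or pass the data (kept in the session here for simplicity).
        request.getSession().setAttribute("subscriber", email);

        // 3. Respond to the request.
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>Subscribed: " + email + "</body></html>");
    }
}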

RedirectServlet Servlet

As with most servlets, the JDC RedirectServlet imports the following packages:

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

Servlets extend the abstract HttpServlet class, which extends the GenericServlet base class.

public class RedirectServlet extends HttpServlet {
So that the servlet can collect the parameter NAME and its corresponding value as a string pair, a String object must be declared:

private String paramName;

To initialize the servlet with the NAME and value string pair, the init method is overridden, and the ServletConfig object is passed as a parameter. Because errors can occur, such as a bad URL, the throws ServletException clause is included.

public void init(ServletConfig config)
throws ServletException {

When overriding the init method, call super.init(config). After the call to super.init(config), the servlet can invoke its own getInitParameter method as shown. The getInitParameter method returns a string containing the value of the named initialization parameter, or null if the requested parameter does not exist. Init parameters have a single string value.

super.init(config);
paramName = config.getInitParameter("paramName");
}

A servlet must override doGet or doPost, depending on whether data is sent by POST or GET in the HTML form. The drawback to overriding only one of these methods is that if production changes the HTML form to call POST instead of GET, the servlet won't work. Generally, it is better to override both methods, as shown below.

RedirectServlet overrides the doGet method and takes two arguments:

HttpServletRequest, with the variable req

HttpServletRequest has useful methods such as getParameter, which returns the value of a named parameter as a string, or null if the parameter does not exist.

HttpServletResponse, with the variable res, has methods that let you specify outgoing information, such as getWriter, which returns a print writer for writing formatted text responses to the browser, or, in this case, sendRedirect, which sends a temporary redirect response to the client using the specified redirect location URL.

public void doGet(HttpServletRequest request,
HttpServletResponse response)
throws ServletException, IOException {
response.sendRedirect(
request.getParameter(paramName));

The doGet method calls the getParameter method through the request object, passing in the paramName object and its value as a string pair, in this case url and the actual URL. A special sendRedirect method then passes the string to the browser and redirects the user to the desired destination.

The RedirectServlet also forces doPost to call doGet, making it possible to use GET or POST in the HTML form without breaking the servlet.


}

public void doPost(HttpServletRequest request,
        HttpServletResponse response)
        throws ServletException, IOException {
    doGet(request, response);
}
}

As long as the value is not null, the code moves on to the res object and its sendRedirect method, which redirects the client to the URL that was passed to the method. This makes for smooth navigation when a client's browser does not support JavaScript, or the user has JavaScript turned off.

Web site design issues frequently pose challenges because of browser incompatibility, user preferences, and non-browser problems, such as sending information to a database or email address. Servlets serve functionality in these situations, acting as a messenger between HTML pages and the server, or as back-up for other technologies.

RedirectServlet is a short yet reliable servlet that works as a back-up to JavaScript, and it doesn't need to deal with browser compatibility or other client-related issues because it works on the server side.

Workshop for WebLogic 10.3

Oracle Workshop for WebLogic 10.3 is celebrating independence day. This year, I'm personally celebrating a long-awaited declaration: this upcoming release introduces complete freedom from pricing, licensing, and registration of any kind. Complete freedom to use your favorite dev/test server. Consistent with Oracle's Hot Pluggable initiative, all features of the Eclipse IDE will be freely available on all supported platforms, including WebSphere, WebLogic, Tomcat, JBoss, Jetty, and Resin.

In addition, JDeveloper and ADF/TopLink runtime users will be supported on Oracle WebLogic Server 10.3, allowing ADF-driven applications to extend onto additional supported platforms.

Oracle WebLogic Server 10.3 developers who use Eclipse will find updated Workshop plug-ins for developing Java/EE and web services that are bundled with the server:

•Support for new industry standards

•IDE based on Eclipse 3.3 & WTP 2.0

•Support for JDK 6

•Windows Vista support

•XMLBeans 2.3 support

•New Web Services Support

•JAX-WS tooling for WebLogic Server 10.3

•Design/Build/Deploy Support

•Start from Java or from WSDL

•JAXB support with new JAXB Wizard

•Create JAXB types from schema

•Generate ant snippets

•New ClientGen Wizard

•Create Web Service Clients from JAX-RPC & JAX-WS Web Services

•Generate ClientGen ant snippets

•Updated JAX-RPC support for WebLogic 10.3

•Support for EE5 Standards

•New EE5 Project Creation

•Create EE5 EAR and EJB Projects

•Create Web Applications based on new standards

•Servlet 2.5

•Full support for new Servlet spec, including optional web.xml

•JSP 2.1, JSF 1.2, JSTL 1.2

•Updated wizards and tag support for new standards (Sun RI and Apache MyFaces)

•WYSIWYG and AppXRay support for Universal Expression Language

•New WebLogic Server Value-Add

•Full Deploy/Debugging support for WLS 10.3

•Continued backward compatibility for WLS 8.1, 9.2, 10.0

•Remote Deployment

•Supports WLS 9.2, 10.0, and 10.3

•Support for new WLS Fast Swap

•New Editors and Wizards for WebLogic Server Deployment Descriptors

•Application upgrade tools for older versions of WebLogic

Adaptive Memory Management for Virtualized Java Environments

In virtualized environments based on hypervisor technology, like VMware’s Virtual Infrastructure 3, adaptive memory management within the Java Virtual Machine plays an important role in optimizing the performance of Java applications. This becomes particularly apparent when multiple instances of an application are run within a memory-constrained environment.
BEA LiquidVM technology is unique in its ability to respond to changes in memory pressure on the underlying hypervisor infrastructure by changing its heuristics and behavior to match its runtime environment. In virtualized environments running enterprise Java applications, the adaptive memory management of BEA LiquidVM technology can allow up to two times the number of virtual machine instances to be run without any external reconfiguration, resulting in much higher application throughput than is achievable with a standard, OS-based software stack.

Sunday, July 6, 2008

PointBase Console in Weblogic

You can administer the default database installed with WebLogic Server (PointBase) using the PointBase administrative console, or any third party database visualization and management tool that can connect via JDBC.

To launch the PointBase Console from the Windows Start menu:

1. Ensure that WebLogic Server is running. You will not be able to use PointBase unless WebLogic Server is running.

2. From the Start menu, choose Programs-->BEA WebLogic Platform 8.1-->Examples-->WebLogic Workshop-->PointBase Console.

When the console starts, it prompts you to enter connection parameters to properly connect to the database. Enter the following connection information, which is also what you will need if you use a third-party product to access the PointBase database:

Driver: com.pointbase.jdbc.jdbcUniversalDriver
URL: jdbc:pointbase:server://localhost:9093/workshop
User : weblogic
Password : weblogic
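The same connection information works from plain JDBC code as well. A minimal sketch, assuming the PointBase JDBC driver jar is on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;

public class PointBaseClient {
    public static void main(String[] args) throws Exception {
        // Load the PointBase driver listed above.
        Class.forName("com.pointbase.jdbc.jdbcUniversalDriver");

        Connection con = DriverManager.getConnection(
                "jdbc:pointbase:server://localhost:9093/workshop",
                "weblogic", "weblogic");
        try {
            // Print the product name and version to confirm the connection.
            System.out.println("Connected to "
                    + con.getMetaData().getDatabaseProductName() + " "
                    + con.getMetaData().getDatabaseProductVersion());
        } finally {
            con.close();
        }
    }
}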

PointBase stores all data in .dbn files and all log information in .wal files. Database properties are stored in PointBase.ini files. Data files for WebLogic Portal are named workshop.dbn and log files for WebLogic Portal are named workshop$1.wal. Pre-built PointBase data, log, and PointBase.ini files for WebLogic Portal samples are included in the following directory:

<%WL_HOME %>\user_projects\domains\<%DomainName%>

Add Logging at Class Load Time with Java Instrumentation

When you're trying to analyze why a program failed, a very valuable piece of information is what the program was actually doing when it failed. In many cases, this can be determined with a stack trace, but frequently that information is not available, or perhaps what you need is information about the data that was being processed at the time of failure.

Traditionally this means using a logging framework like log4j or the Java Logging API, and then writing and maintaining all necessary log statements manually. This is tedious and error-prone, which makes it a good candidate for automation. Java 5 added the Java Instrumentation mechanism, which allows you to provide "Java agents" that can inspect and modify the byte code of classes as they are loaded.

This article will show how to implement such a Java agent, which will transparently add entry and exit logging to all methods in all your classes with the standard Java Logging API. The example used is Hello World:

public class HelloWorld {
public static void main(String args[]) {
System.out.println("Hello World");
}
}



And here is the same program with entry and exit log statements added:


import java.util.Arrays;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingHelloWorld {
final static Logger _log = Logger.getLogger(LoggingHelloWorld.class.getName());

public static void main(String args[]) {
if (_log.isLoggable(Level.INFO)) {
_log.info("> main(args=" + Arrays.asList(args) + ")");
}
System.out.println("Hello World");
if (_log.isLoggable(Level.INFO)) {
_log.info("< main()");
}
}
}



The default logger format generates output similar to:

2007-12-22 22:08:52 LoggingHelloWorld main
INFO: > main(args=[])
Hello World
2007-12-22 22:08:52 LoggingHelloWorld main
INFO: < main()


Note that each log statement is printed on two lines. First, a line with a time stamp, the provided log name, and the method in which the call was made, and then a line with the provided log text.

The rest of the article will demonstrate how to make the original Hello World program behave like the logging Hello World by manipulating the byte code when it is loaded. The manipulation mechanism is the Java Instrumentation API added in Java 5.
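The hook for that manipulation is a premain method registered through the java.lang.instrument API. The skeleton below is a minimal sketch of the mechanism only; the actual byte-code rewriting (typically done with a byte-code library such as ASM or Javassist) is indicated by a comment rather than implemented:

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class LoggingAgent {

    // Invoked by the JVM before main() when run with -javaagent:agent.jar
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            public byte[] transform(ClassLoader loader, String className,
                    Class<?> classBeingRedefined,
                    ProtectionDomain protectionDomain,
                    byte[] classfileBuffer) {
                // Rewrite classfileBuffer here to inject the entry/exit log
                // statements shown above. Returning null leaves the class
                // unchanged.
                return null;
            }
        });
    }
}

The agent is packaged in a jar whose manifest contains a Premain-Class: LoggingAgent entry, and the target program is then started with -javaagent:agent.jar.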

Launch Java Applications from Assembly Language Programs

The Java Native Interface (JNI) is a mechanism that can be used to establish communication between native language programs and the Java virtual machine. The documentation for JNI and the technical literature on JNI deal extensively with interactions between the JVM and C/C++ code. The Java SDK even provides a utility to generate a header file to facilitate calling C/C++ programs from Java code. However, there is hardly any mention of Java and assembly language code working together. In an earlier article I showed how assembly language programs can be called from Java applications. Here I deal with the technique for invoking Java programs from an ASM process, through a demo application that calls a Java method from assembly language code. The Java method brings up a Swing JDialog to show that it has, indeed, been launched.

Why Java with ASM?

JNI is essential to the implementation of Java, since the JVM needs to interact with the native platform to implement some of its functionality. Apart from that, however, use of Java classes can often be an attractive supplement to applications written in other languages, as Java offers a wide selection of APIs that makes implementation of advanced functions very simple.

Some time ago, I was associated with an application to collect real-time data from a number of sources and save them in circular buffers so that new data would overwrite old data once the buffer got filled up. If a designated trigger event was sensed through a digital input, a fixed number of data samples would be saved in the buffers so that a snapshot of pre- and post-trigger data would be available. The original application was written in assembly language. After the application was used for a few months, it was felt that it would be very useful to have the application mail the snapshots to authorized supervisors whenever the trigger event occurred. Of course, it would have been possible to write this extension in assembly, but the team felt that in that particular instance it was easier to write that extension in Java and hook it up with the ASM program. As I had earlier worked with ASM-oriented JNI, I knew this could be done and, indeed, the project was implemented quickly and successfully.

I am sure there are many legacy applications written in assembly language that could benefit from such add-ons. However, it is not only for old applications in need of renovation that JNI can prove useful. Although it may seem unlikely to some of us, assembly language is still used for writing selected portions of new programs. In an article published not very long ago, the author says, "I have found that many of Sun's partners still use assembly language in their products to ensure that hot code paths are as efficient as possible. While compilers are able to generate much more efficient code today, the resulting code still doesn't always compete with hand-coded assembly written by an engineer that knows how to squeeze performance out of each microprocessor instruction. Assembly language remains a powerful tool for optimization, granting the programmer greater control, and with judicious use can enhance performance." Clearly, in such "mixed language" applications the ability to use Java with ASM can be useful.

Note that the technique shown here can also be used to call Java code from languages other than ASM. If JInvoke is rewritten as a .dll, code written in FORTRAN, for instance, can link to it and call a Java method.

I have used JNI with legacy ASM code in two ways:

1. Functional enhancement: Mail-enabling an existing ASM application, as
mentioned earlier.
2. Interface enhancement: Adding interactive user interfaces (mostly AWT, but
some Swing as well).

These enhanced applications have run on Windows 2000 and XP. The Java versions used were 1.3, 1.4, and 1.6. In all cases the applications worked smoothly.

Sunday, June 15, 2008

Enterprise Portals

Portals are first and foremost a user interface paradigm. Portal user interfaces divide the browser into the following components:

1. Header - the top-most section of the browser page, contains the branding
for the portal
2. Portal pages - accessed via page tabs at the top of the browser page
3. Portlets - rectangular areas on the page, each one usually representing
an application or a task
4. Navigation - possibly a left navigation box, or a menu for navigating to
different pages
5. Footer - the bottom-most section of the browser page, contains
   disclaimers

While not all of these components are necessary in a product offering, the major components are as follows:

1. Presentation Services - the user interface rendering engine
2. Federation Fabric - the capability to deploy portals and portlets in a
   distributed manner
3. Enterprise Integration - support for Web service and SOA technologies
4. Intelligent Administration - the ability to dynamically administer the
   security and layout of a deployed portal
5. Development Framework - an application development environment that
   provides consistency in the implementation of Web applications across
   the enterprise
6. Content - the ability to manage documents and connect to external
   content management systems
7. Search - providing a comprehensive search function across the entire
   portal
8. Collaboration - providing tools to enable users to collaborate, like
   discussion forums and a group calendar

Enterprise portal products are therefore sizable pieces of software. They not only provide user interface capabilities, but they also provide major features in support of portal initiatives.

Enterprise portal vendors :

Vendors have long been supporting enterprise portal initiatives with portal product offerings. Most of the major vendors in the space have been delivering product for 8 to 10 years. While the enterprise portal market is not as mature as databases, Web servers, or Java application servers, it is a well-established product space. The list below contains a sampling of the major enterprise portal products:

BEA WebLogic Portal
BEA AquaLogic Interaction Portal
Oracle Portal
Microsoft SharePoint Portal
IBM WebSphere Portal
Vignette Portal
Sun Portal

As with any enterprise software product, a software selection process is necessary to decide which portal platform is right for your enterprise. Data sheets and white papers are available from each vendor to help with the decision process.

Sunday, May 25, 2008

XStream

XStream is a simple library to serialize objects to XML and back again.

Features :
1. Ease of use. A high level facade is supplied that simplifies common use cases.
2. No mappings required. Most objects can be serialized without need for specifying mappings.
3. Performance. Speed and low memory footprint are a crucial part of the design, making it suitable for large object graphs or systems with high message throughput.
4. Clean XML. No information is duplicated that can be obtained via reflection. This results in XML that is easier to read for humans and more compact than native Java serialization.
5. Requires no modifications to objects. Serializes internal fields, including private and final ones. Supports non-public and inner classes. Classes are not required to have a default constructor.
6. Full object graph support. Duplicate references encountered in the object-model will be maintained. Supports circular references.
7. Alternative output format. The modular design allows other output formats. XStream ships currently with JSON support and morphing.
8. Integrates with other XML APIs. By implementing an interface, XStream can serialize directly to/from any tree structure (not just XML).

Where We Use :
Transport
Persistence
Configuration
Unit Testing
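A minimal sketch of the high-level facade (the Person class and alias are illustrative; DomDriver is used so no extra XML parser is required):

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.xml.DomDriver;

public class XStreamExample {

    // Illustrative class: no mappings and no default constructor required.
    static class Person {
        private String name;
        private int age;
        Person(String name, int age) {
            this.name = name;
            this.age = age;
        }
    }

    public static void main(String[] args) {
        XStream xstream = new XStream(new DomDriver());
        // Optional alias: emit <person> rather than the full class name.
        xstream.alias("person", Person.class);

        String xml = xstream.toXML(new Person("Joe", 42));
        System.out.println(xml);

        // ...and back again.
        Person joe = (Person) xstream.fromXML(xml);
        System.out.println(joe.name + ", " + joe.age);
    }
}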

Download :
XStream

Saturday, May 24, 2008

BPM Value Objects - Weblogic

Three BPM packages, com.bea.wlpi.common, com.bea.wlpi.common.security, and com.bea.eci.repository.helper, provide classes, or value objects, for obtaining object data at both definition time and run time. For more information about each of these packages, see the BPM API.

Each value object shares the following characteristics:

Maintains various BPM server-side objects, including session EJBs (for example, templates, template definitions, and business calendars) and entity EJBs that are used internally, and allows you to obtain data values from these objects.

Is represented by an individual Java class, the members of which are collectively referred to as values.

Is serializable, so it can be exchanged between client and server.

Overrides the equals() method for testing two objects of the same type for equality, as follows: public boolean equals(Object obj)

Implements the java.lang.Comparable interface for comparing two objects of the same type, as follows: public int compareTo(Object obj)

When part of a homogeneous list, value objects can be searched and sorted using the following methods:

java.util.Collection.contains(Object o)

java.util.List.indexOf(Object o)

java.util.Collections.sort(List list)

java.util.Collections.sort(List list, Comparator c)


If the natural ordering of an object (as implemented by the int compareTo(Object o) method) is based on the same field used by the boolean equals(Object o) method, the following method

int java.util.Collections.binarySearch(List list, Object o)

can be used for rapidly searching a list that was sorted earlier, using the java.util.Collections.sort(List list) method.
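For example, sorting and binary-searching a list of value objects might look like the following sketch; it assumes OrganizationInfo's compareTo and equals behave as described above:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import com.bea.wlpi.common.OrganizationInfo;

public class OrgSearchExample {
    public static void main(String[] args) {
        List orgs = new ArrayList();
        orgs.add(new OrganizationInfo("ORG3"));
        orgs.add(new OrganizationInfo("ORG1"));
        orgs.add(new OrganizationInfo("ORG2"));

        // Sort using the natural ordering defined by compareTo().
        Collections.sort(orgs);

        // Rapidly locate an element in the sorted list.
        int pos = Collections.binarySearch(orgs, new OrganizationInfo("ORG2"));
        System.out.println("Found at index: " + pos);
    }
}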

If the importing and exporting of object data is supported, the value object also implements the com.bea.wlpi.common.Publishable interface. For more information, see Publishing Workflow Objects.

The following table lists the value objects that can be used to access object data.


Value Object                                         To access

com.bea.wlpi.common.BusinessCalendarInfo             Business calendar data
com.bea.wlpi.common.EventKeyInfo                     Event key data
com.bea.wlpi.common.InstanceInfo                     Workflow instance data
com.bea.wlpi.common.OrganizationInfo                 Organization data
com.bea.wlpi.common.security.PermissionInfo          Permission data
com.bea.wlpi.common.RepositoryFolderHelperInfo       XML repository folder data
com.bea.eci.repository.helper.RepositoryFolderInfo   XML repository folder data
com.bea.wlpi.common.RerouteInfo                      Task rerouting data
com.bea.wlpi.common.RoleInfo                         Role data
com.bea.wlpi.common.RolePermissionInfo               Role permission data
com.bea.wlpi.common.TaskInfo                         Workflow task data
com.bea.wlpi.common.TemplateDefinitionInfo           Template definition data
com.bea.wlpi.common.TemplateInfo                     Template data
com.bea.wlpi.common.UserInfo                         User data
com.bea.wlpi.common.security.UserPermissionInfo      User permission data
com.bea.wlpi.common.VariableInfo                     Variable data
com.bea.wlpi.common.VersionInfo                      Version number data
com.bea.wlpi.common.XMLEntityHelperInfo              XML repository entity data
com.bea.eci.repository.helper.XMLEntityInfo          XML repository entity data


Creating Value Objects :

To create a value object, use the associated constructor. Each of the BPM value objects described in the table above provides one or more constructors for creating object data. The constructors for creating value objects are described in the Value Object Summary.

For example, the following code creates an OrganizationInfo object, sets the organization ID to ORG1, and assigns the resulting object to organization.

OrganizationInfo organization = new OrganizationInfo("ORG1");