Thursday, October 07, 2010
Tested creating tables with some scripts we have here. Testing was against Oracle 10g, SQL Server 2005, and MySQL 5.x.
Somehow Oracle is the least intuitive.
I need some links for next time:
Connecting to SQL Plus
SQL Plus FAQ
Deleting tables (called dropping in Oracle)
Creating and dropping sequences
Creating a Trigger
Using a *.sql script from within SQL Plus
Dropping a trigger
Seems a lot more complex than MySQL or SQL Server.
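Below is a minimal sketch of the sequence-plus-trigger pattern those links cover - the way Oracle 10g fakes an auto-increment column - with made-up table and object names:

CREATE TABLE employees (
  id   NUMBER PRIMARY KEY,
  name VARCHAR2(100)
);

CREATE SEQUENCE employees_seq START WITH 1 INCREMENT BY 1;

-- populate id from the sequence on every insert
CREATE OR REPLACE TRIGGER employees_bi
BEFORE INSERT ON employees
FOR EACH ROW
BEGIN
  SELECT employees_seq.NEXTVAL INTO :NEW.id FROM dual;
END;
/

-- cleanup ("dropping"); the trigger, sequence, and table are dropped separately
DROP TRIGGER employees_bi;
DROP SEQUENCE employees_seq;
DROP TABLE employees;

To run a script like this from within SQL Plus you can use the @ command, e.g. @create_employees.sql.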
Monday, October 04, 2010
We're a bearer token company
It was mentioned in a meeting in my first week that we're a bearer token company. The more I learn about my job and the different aspects of our company, the more I realize how true it is.
I've used tokens in the past and they have many definitions. Just type define:token into a Google search.
Tokens in computer software - and in our company - provide identity delegation. They can act as proxies for you so you don't have to keep logging into different sites with the same username/password, which is the idea behind Single Sign-On (SSO).
A user logs in once, and a token is then used in their place when they want to use a service outside the domain where they initially logged in. Ping Federate bears the brunt of creating the token and passing it between disparate domains, sparing each site the expense of a separate login. For example, a user would log in to United Airlines to buy a round-trip ticket. That user might also need to rent a car, so United would use our Ping Federate SSO solution to pass a token from United to Hertz, letting the user rent a car without having to log in again when transferred to the Hertz rental site.
Since tokens are the basis of communication between our federated servers, they can range from the quite simple, passing on basic user credentials, to the more complex, providing further information or attributes to a requesting service provider.
In the beginning I had a bit of confusion with some terms that really meant the same thing. Since a lot of terms overlap I'll just list them here.
We use SAML tokens. These are the same as SAML assertions (or, to be exact, are enclosed within a SAML message), which are basically XML documents sent over the wire. Whether you say token, assertion, SAML message, SAML assertion, or XML assertion, in many ways they are all synonymous terms. They provide the mechanism to proxy for an initial sign-on and pass that sign-on to another domain. By passing on the initial sign-on we are forwarding assertions about the initial user's credentials.
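To make that concrete, here is a hypothetical, heavily trimmed SAML 2.0 assertion; the issuer, subject, and attribute values are placeholders, and a real assertion would also carry a digital signature:

<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_a1b2c3" Version="2.0" IssueInstant="2010-10-04T12:00:00Z">
  <saml:Issuer>https://idp.example.com</saml:Issuer>
  <!-- <ds:Signature> would normally appear here -->
  <saml:Subject>
    <saml:NameID>user@example.com</saml:NameID>
  </saml:Subject>
  <saml:AttributeStatement>
    <saml:Attribute Name="membership">
      <saml:AttributeValue>gold</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>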
Assertions are also called claims. Since tokens contain assertions/claims, they can range from simple claims that state who you are (basic authentication) to more extensive assertions/tokens/claims (see how I interchange them?) that authenticate you and/or provide more attributes about who you are. It all depends on what you want an assertion/token to do and what is requested by the service you are trying to access.
This is the basis of what we do.
Friday, October 01, 2010
find and locate
I've been using Unix find on the command line for many years and it's certainly saved me some time trying to find files or strings within files.
Here's an example I like to use:
find . -name "*.java" -exec grep -l "Boolean" {} \;
This will find all Java files, from the current directory down, that contain the string Boolean and then list their paths.
For something a little more basic where I just need to search for file extension types I use something like this:
find . -name "*.properties" -print
But one thing I had never heard of until yesterday was the locate command.
locate searches a prebuilt index and was helpful when I was trying to find some JAR files in a local repository.
Basically it combines find and cron: a job run at scheduled intervals indexes the local filesystem, and locate then uses that index to quickly give back results that might have taken a long time with the find command alone.
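A quick sketch, assuming the GNU findutils flavor of locate (the JAR name is just an example); updatedb is the indexing job that cron normally runs for you:

# refresh the index by hand (cron usually does this nightly)
sudo updatedb

# queries against the index come back almost instantly
locate junit.jar
locate -i readme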
Sunday, September 26, 2010
Mystery of the Nile Blu-ray
Watched the IMAX movie Mystery of the Nile on Blu-ray for a second time.
Of the five IMAX Blu-rays I own, this is one of my favorites.
I like the raft journey from Lake Tana in Ethiopia to the mouth of the Nile at Alexandria.
They visit some amazing sites like rock monolith churches in Lalibela, the Meroe pyramids in Sudan where the Kingdom of Kush existed around 1500 B.C., and Luxor, Egypt.
Doing a third to half of what they rafted, or maybe puddle-jumping by plane between some of the highlights, would be quite cool.
Friday, September 24, 2010
digital certificates, keys, and message validation
I've been exploring digital certificates a little more in depth this week. They're basically analogous to a "driver's license" in the digital domain.
A driver's license basically allows you to validate who you are (e.g., a face to a name, your age, etc.) to an authority who would like to authenticate and authorize you (maybe a policeman or doorman) based on their trust in the authority that issued your license, usually the DMV. Because the license is hard to duplicate and has a certain format (being laminated, having your photo, and a magnetic strip and/or watermarks), the issuer has provided strong backing validation when you present your license to third parties like a policeman or other authority figures. Therefore trust and authentication are - hopefully - maintained when you use a license.
The equivalent license for validating who you are over the Internet is called a digital certificate. Although it might not validate your age, it will help validate a message or document that you send along with it to the recipient.
Obviously, driver's licenses expose a lot of private information when you view them: photo, age, address, etc. You keep that information private by keeping the license in your wallet. Digital certificates don't carry the same information. They do provide some identity information about who the individual or server is, just like a driver's license, but unlike a driver's license a digital certificate provides a mechanism to encrypt an exchange (aka transaction) between a client and a server - and can itself be encrypted too.
And since sensitive transactions happen over the Internet, where a sophisticated individual can intercept your credit card information, encryption needs to happen for almost all stages of a transaction (or at least the sensitive ones, depending on your level of need).
In this case, a digital certificate is used by a server to authenticate itself to a client browser. Amazon, for example, sends you a digital certificate to authenticate itself to your browser and then encrypts your shopping cart transaction and so on.
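As an aside (OpenSSL isn't mentioned above, it's just a common way to poke at this), you can look at the certificate a server presents yourself; server.crt is a placeholder filename:

# fetch and display the certificate chain a server presents during the SSL handshake
openssl s_client -connect www.amazon.com:443 -showcerts

# decode a saved certificate to see its issuer, subject, and validity dates
openssl x509 -in server.crt -noout -text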
Tuesday, September 21, 2010
New to Cobertura
Finally got to use Cobertura. I initially started working with the 1.9 build but then decided to use 1.9.4.1, which worked quite well. If I can, I'll always use the latest version.
Took me about a week of experimenting with it - in addition to other duties - to get a handle on the process for creating accurate Cobertura coverage reports.
Some of the things that got me during the initial week were: 1) 100% coverage reports (when I knew there wasn't 100% coverage), 2) using the Ant tasks effectively, 3) having an incomplete cobertura.ser generated by JBoss (in addition to a cobertura.ser.lock file), and 4) getting Cobertura to work with my Eclipse instance where I'd run my JUnit/HtmlUnit tests.
For #1, I had to merge the originally created cobertura.ser files (from when I instrumented the JAR files I needed) with the one created when running JBoss and the HtmlUnit tests through the Eclipse IDE. I ultimately used the Cobertura Ant task cobertura-instrument to explicitly create a .ser datafile while instrumenting, and then used cobertura-merge to merge the .ser files together after running my tests.
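Here's a minimal sketch of that wiring, assuming Cobertura 1.9.4.1's standard Ant tasks; the directory names and datafile paths are made up:

<property name="cobertura.dir" value="/path/to/cobertura-1.9.4.1"/>
<path id="cobertura.classpath">
  <fileset dir="${cobertura.dir}">
    <include name="cobertura.jar"/>
    <include name="lib/**/*.jar"/>
  </fileset>
</path>
<taskdef classpathref="cobertura.classpath" resource="tasks.properties"/>

<!-- instrument the JARs, writing class metadata to an explicit .ser datafile -->
<cobertura-instrument todir="instrumented" datafile="instrument.ser">
  <fileset dir="deploy/lib">
    <include name="*.jar"/>
  </fileset>
</cobertura-instrument>

<!-- after the test run, merge the instrumentation-time and runtime .ser files -->
<cobertura-merge datafile="merged.ser">
  <fileset dir=".">
    <include name="instrument.ser"/>
    <include name="cobertura.ser"/>
  </fileset>
</cobertura-merge>

<!-- generate the HTML coverage report from the merged datafile -->
<cobertura-report format="html" datafile="merged.ser" destdir="coverage-report" srcdir="src"/>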
For #2, I got to know the Ant Task Reference page on the Cobertura SourceForge site, which was helpful.
For #3, I'd get an incomplete .ser file after running my HtmlUnit test cases against a running JBoss server with my deployed application, and I'd always have a cobertura.ser.lock file too. One thing I noticed was that if I ran multiple test suites back to back, data.zip files would pile up; deleting the old ones seemed to solve most of my lock file issues.
For #4, I had to have Eclipse Helios with the development packages, along with the test packages I was using to test the application's admin interface. I was able to run suite test classes with up to 700 tests by running them as JUnit, and of course I made sure JBoss was up and running with my deployed application, either built from my dev folder using a Maven command or from a Hudson build. Either way worked.
Tuesday, August 24, 2010
Creating keystore for Tomcat 6
I've been training on Ping Identity's Ping Federate server for the last two weeks. This week I'm setting up separate Tomcat servers to interact with the Ping Federate servers on both the identity provider and service provider sides. Before, I hosted the quickstart apps on the same server instance as Ping Federate.
To setup SSL with Tomcat I needed to edit the server.xml file under the conf directory and then create a keystore file that Tomcat will use to verify trusted certificates sent from the federated servers.
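For reference, the relevant piece of conf/server.xml is the HTTPS connector; this is a sketch for Tomcat 6, and the keystore path and password are placeholders:

<!-- HTTPS connector in conf/server.xml -->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="${catalina.home}/conf/tomcat.keystore"
           keystorePass="changeit"/>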
The initial command I ran was:
>>>>> : keytool -genkey -alias amf -keyalg RSA -keystore tomcat.keystore [-file 111C353A88F.crt]
The last part of the command with the -file argument wasn't needed (-genkey doesn't take one). I basically created a keystore file named tomcat.keystore and a new self-signed certificate, with the Java keytool prompting me with the questions listed below:
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: Andrew Fernandez
What is the name of your organizational unit?
[Unknown]: Engineering
What is the name of your organization?
[Unknown]: Ping Identity
What is the name of your City or Locality?
[Unknown]: Denver
What is the name of your State or Province?
[Unknown]: CO
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=Andrew Fernandez, OU=Engineering, O=Ping Identity, L=Denver, ST=CO, C=US correct?
[no]: yes
Enter key password for <amf>
(RETURN if same as keystore password):
After this I verified what was in my keystore using this command:
>>>>> : keytool -list -v -keystore tomcat.keystore
Since I didn't yet have the certificate from the federate side in my keystore, I then ran this command, making sure the *.crt file I needed to place in the keystore was in the same directory:
>>>>> : keytool -import -trustcacerts -alias amf2 -file 111C353A88F.crt -keystore tomcat.keystore
Finally, I exported the base certificate from Tomcat so I could import it into my federated server:
>>>>> : keytool -exportcert -alias amf -file amfIDP -keystore tomcat.keystore
Here -alias is what the certificate goes by in the keystore file, -file is what I want to call the exported certificate, and -keystore is the actual keystore I'll be exporting the certificate from.
Deletion is pretty straightforward:
>>>>> : keytool -delete -alias amf -keystore tomcat.keystore
Just specify -delete, -alias for the certificate you want to delete, and -keystore for the keystore you want to delete it from.
Thursday, August 12, 2010
Mac OS X differences from Windows
Having started the new job at Ping Identity I now have a MacBook Pro with OS X.
I've had to get used to two notable differences:
1) Window resizing in OS X is only from the lower right-hand corner, versus Windows XP, where you can resize from the bottom and sides too.
2) Copying and pasting in the Finder (the equivalent of Windows Explorer) is not like Windows. If I choose a file and copy it, I cannot just highlight a new directory and choose paste. I actually have to enter the directory and paste, making sure the Finder window's title shows the directory where I want to paste.
At least that's what works for me without any special changes to the OS.
Tuesday, July 27, 2010
Web services WSDLs and SOAP bindings
I've been testing web services for 4 years now.
Sometimes it's good to be reminded of the basics when you're explaining JAX-RPC or JAX-WS web services.
Ultimately, when you specify RPC or document literal wrapped, you're stating how the message will be translated into the SOAP envelope that is transmitted over the wire (a.k.a. the Internet).
In the WSDL you specify what kind of translation you want in the binding node.
That style is then used when the message is sent inside the SOAP envelope, specifically the SOAP body.
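For example, a document/literal binding might look like this in the WSDL's binding node (the port type and operation names here are invented):

<binding name="HelloBinding" type="tns:HelloPortType">
  <soap:binding style="document"
      transport="http://schemas.xmlsoap.org/soap/http"/>
  <operation name="sayHello">
    <soap:operation soapAction=""/>
    <input>
      <soap:body use="literal"/>
    </input>
    <output>
      <soap:body use="literal"/>
    </output>
  </operation>
</binding>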
This IBM article states it well in the first paragraph.
http://www.ibm.com/developerworks/webservices/library/ws-whichwsdl/
Friday, June 18, 2010
Handler Chains: New schema namespace and root node
There's been a change in what is accepted for a handler chain XML file within OEPE, and possibly the WLS runtime libraries.
It used to be acceptable to use http://www.bea.com/ns/weblogic/90 for the default namespace, but you'll need to start using http://java.sun.com/xml/ns/javaee.
Also, the root node for handler chain files needs to be handler-chains instead of the old weblogic-wsee-clientHandlerChain.
Just to be clear here is an old example file:
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-wsee-clientHandlerChain
xmlns="http://www.bea.com/ns/weblogic/90"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:j2ee="http://java.sun.com/xml/ns/j2ee">
<handler>
<j2ee:handler-name>clienthandler1</j2ee:handler-name>
<j2ee:handler-class>
helloservice.ClientHandler1
</j2ee:handler-class>
<j2ee:init-param>
<j2ee:param-name>ClientParam1</j2ee:param-name>
<j2ee:param-value>value1</j2ee:param-value>
</j2ee:init-param>
</handler>
<handler>
<j2ee:handler-name>clienthandler2</j2ee:handler-name>
<j2ee:handler-class>
helloservice.ClientHandler2
</j2ee:handler-class>
</handler>
</weblogic-wsee-clientHandlerChain>
And the newer one:
<?xml version="1.0" encoding="UTF-8"?>
<handler-chains
xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:j2ee="http://java.sun.com/xml/ns/j2ee">
<handler>
<j2ee:handler-name>clienthandler1</j2ee:handler-name>
<j2ee:handler-class>
helloservice.ClientHandler1
</j2ee:handler-class>
<j2ee:init-param>
<j2ee:param-name>ClientParam1</j2ee:param-name>
<j2ee:param-value>value1</j2ee:param-value>
</j2ee:init-param>
</handler>
<handler>
<j2ee:handler-name>clienthandler2</j2ee:handler-name>
<j2ee:handler-class>
helloservice.ClientHandler2
</j2ee:handler-class>
</handler>
</handler-chains>
Friday, June 04, 2010
JAXB reference implementation and MOXy
JAXB is an important part of JAX-WS web services. It provides object to XML mapping and the reverse.
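As a minimal sketch of that round trip using the standard JAXB API (the Customer class here is made up):

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

public class Demo {

    @XmlRootElement
    public static class Customer {
        // public fields are mapped to XML elements by default
        public int id;
        public String name;
    }

    public static void main(String[] args) throws Exception {
        Customer c = new Customer();
        c.id = 42;
        c.name = "Ada";

        // marshal the object to XML on stdout; unmarshalling is the reverse
        JAXBContext ctx = JAXBContext.newInstance(Customer.class);
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(c, System.out);
    }
}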
In OEPE, a wizard using WLS Ant scripts has been used for the last two-plus years to create JAXB types using GlassFish libraries. GlassFish is the Java EE reference implementation server (currently version 3).
The reference implementation (RI) for JAXB resides under Metro, the web services stack within GlassFish.
Since the JAXB RI under Metro provides great basic functionality for JAXB marshalling/unmarshalling of types but not a lot of add-on features/extensions, there are further implementations that can be used.
One resides under the EclipseLink persistence services framework and is called MOXy.
MOXy provides support for mapping between schema and Java types, more specialized JPA mapping support and some better performance, to name a few features.
Here's a link:
http://wiki.eclipse.org/EclipseLink/FAQ/WhatIsMOXy
Monday, April 19, 2010
JSF message bundles and locales
JavaServer Faces supports localization, or internationalization (I18N); you choose the term. I find them to be synonymous and will use them interchangeably.
If you want to split hairs go here:
http://www.w3.org/International/questions/qa-i18n
I18N allows you to adapt your software to different languages, whether English, German, or Chinese, thereby supporting web clients around the world.
One easy way to make JSP localization work is to use it in a JSF-enabled dynamic web project inside an OEPE-enabled Eclipse IDE.
You'll need to update the faces-config.xml file, adding properties files that you've made and locales that you support.
Here's a faces-config.xml file example:
<faces-config version="1.2">
<application>
<message-bundle>resources.application</message-bundle>
<message-bundle>resources.Greeting_fr</message-bundle>
<message-bundle>resources.Greeting_en</message-bundle>
<message-bundle>resources.Greeting_de</message-bundle>
<locale-config>
<default-locale>en</default-locale>
<supported-locale>fr</supported-locale>
<supported-locale>de</supported-locale>
</locale-config>
</application>
</faces-config>
Each message-bundle element lists a language locale properties file. In my case I put the properties files under src/resources. Note that the properties files listed inside the message-bundle tags don't include the *.properties file extension; it's implied.
Under locale-config I place supported languages. In my case I have English, French and German. My default is English.
The great thing about OEPE is the ability to add message-bundle and locale-config elements to the faces-config.xml file by using the Faces Configuration Editor. You can easily add new languages that are supported and browse for new language message bundles.
When I want to use message bundles I can test them with the loadBundle and outputFormat tags inside a JSF-enabled JSP page.
Here's an excerpt from the JSP page:
<f:loadBundle basename="resources.Greeting" var="greeting1" />
<h:outputFormat value="#{greeting1['login']}">
<f:param value="Joe"></f:param>
<f:param value="05/19/2010"></f:param>
</h:outputFormat>
<h:outputFormat value="#{greeting1['welcome']}"></h:outputFormat>
Note that I specify "resources.Greeting" for basename.
The part before the dot is the folder, and the second part matches all the Greeting_*.properties files.
My properties files are:
1) Greeting_de.properties
2) Greeting_en.properties
3) Greeting_fr.properties.
I don't need to add the locale suffix or .properties extension to basename since they're implied; whichever file matches the client's preferred language will be used for the localized message.
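For instance, the bundles might contain entries like these (hypothetical contents; outputFormat runs the value through MessageFormat, so {0} and {1} are filled in by the f:param values):

# src/resources/Greeting_en.properties
login=Hello {0}, you last logged in on {1}.
welcome=Welcome!

# src/resources/Greeting_fr.properties
login=Bonjour {0}, votre derni\u00e8re connexion date du {1}.
welcome=Bienvenue !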
The locale is figured out from what the browser passes to the server. In Internet Explorer I can change my preferred language and see a different message. This is done under Tools > Internet Options > General > Languages. I can easily add and rearrange the preferred languages within that dialog and reload the web page to see the language change on the fly.
Monday, April 12, 2010
Importance of quality, time and features
In an ideal world quality, time and features should be given equal weight in the software development lifecycle.
The ranking for most firms it seems is:
1) quality
2) time
3) features.
Quality should always be given precedence. If things need to go due to scheduling, you should throw out features first, then slip the schedule, and always try to maintain quality standards.
Makes sense and we practice that here too.
Tuesday, March 16, 2010
Want to upgrade OEPE/Eclipse?
I've been testing upgrade of OEPE from one version to another.
Upgrading might not be as intuitive as it is for other applications, where it can be as easy as one click.
To get the latest/greatest you need to add an upgrade site under Help > Install New Software > Add.
For Galileo builds (which is the latest/greatest that we support) people should use this URL:
http://download.oracle.com/otn_software/oepe/galileo/
Then you can use the Check for Updates choice.
One thing some people might need to do is update their proxy settings under Window > Preferences > General > Network Connections. If I'm behind a firewall I choose Manual and then enter the proxy URL and port.
Friday, February 26, 2010
Hot Swap/Fast Swap on WLS using OEPE
Testing fast swap today using OEPE on WLS server(s).
Part of the day I was wondering if there's a difference in how FastSwap works when enabled in weblogic-application.xml versus weblogic.xml.
With some dynamic web projects containing a JSP that referenced a POJO, when FastSwap was enabled only in the EAR's weblogic-application.xml, the expected messages didn't seem to be logged, versus when it was enabled in the project's own weblogic.xml, they were. Make sense?
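For reference, my understanding is that it's the same element in both descriptors; only the file it lives in differs:

<!-- weblogic-application.xml (EAR level) -->
<fast-swap>
  <enabled>true</enabled>
</fast-swap>

<!-- weblogic.xml (web project level) -->
<fast-swap>
  <enabled>true</enabled>
</fast-swap>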
I tested FastSwap in debug mode too, but isn't that kind of redundant? Hot swap has been part of WLS debug mode for a long time, right? So what does enabling FastSwap add when debug mode essentially already does hot redeploy?
At least that's my impression from this article:
http://technology.amis.nl/blog/5665/fast-swap-in-weblogic-103-and-jdeveloper-11g-redeploy-after-compile-in-running-application
Thursday, January 28, 2010
Lesson learned from testing new Eclipse server adapters?
I work for Oracle. It's a huge company and even bigger now from the Sun acquisition.
There are a multitude of offices around the world, and even with offices, many people work from home or some other remote location. My team in particular is spread out from Germany to the West Coast of the United States. We inhabit many time zones, and face-to-face collaboration is rare, except for yearly team get-togethers or review periods where I might see my manager or other colleagues. Weekly meetings on the phone are at 10 AM my time, but for someone in Germany it's 6 PM, and on the West Coast it's 9 AM.
When we do have interaction it is limited to phone, web conferencing, IM (whether internal or use of Yahoo IM), and VNC sessions.
We're currently testing upgrading from one release of OEPE to another. We support many different upgrade scenarios, including server upgrade. Although it's not a full upgrade - the focus is on server adapters - there are still things that need to be done properly.
Since this was my first foray into testing server adapter upgrade, there were some things I needed to learn by asking a more experienced colleague. Knowledge transfer occurred over IM. Unfortunately, a couple of steps were missing in the initial transfer, and an issue came up after my testing.
While testing I thought the server adapter upgrade process was a little too short. I should have listened to my instincts and asked some more questions, since I was not doing a full upgrade to our current BETA product.
What needed to be done was to edit two JAR files under the plugins directory:
1) org.eclipse.wst.server.discovery
2) org.eclipse.wst.server.ui
The specific file that needed to be edited in both was: serverAdapterSites.xml
Extraneous sites needed to be removed and an internal update site for testing needed to overwrite either the Ganymede or Galileo default sites.
Regardless, if your testing feels a little too straightforward, ask yourself whether you're going far enough or doing the correct steps for the test to succeed - especially if you're in an isolated environment, where steps that would be easily communicated face to face can easily be lost through newer means of communication.
Tuesday, January 12, 2010
History of APP-INF/lib
I've been working for the last few years at BEA Systems and now at Oracle. Plenty of application development and testing has involved using projects within EARs.
I've always taken APP-INF/lib for granted and never looked at its history. It was provided by BEA Systems for WebLogic Server as an easy way to share libraries and other utility classes among the enclosed projects of an EAR.
As an aside, another BEAism was the use of web services conversations (begin, continue, end), easily mocked up with service controls. Where did that go with the new JAX-WS spec?
Going back to APP-INF/lib, there's a good writeup on moving from WebLogic to GlassFish:
http://weblogs.java.net/blog/sekhar/archive/2009/03/weblogic_to_gla.html
It will be interesting to see where it goes from here once the merger of Sun and Oracle occurs.